notes-computer-jasper-jasperNotes3 2

---

"

mattj 6 hours ago

So the issue here is two-fold:

- It's very hard to do 'intelligent routing' at scale.
- Random routing plays poorly with request times that have a really bad tail (median is 50 ms, 99th percentile is 3 seconds).

The solution here is to figure out why your 99th is 3 seconds. Once you solve that, randomized routing won't hurt you anymore. You hit this exact same problem in a non-preemptive multi-tasking system (like gevent or golang).

aristus 6 hours ago

I do perf work at Facebook, and over time I've become more and more convinced that the most crucial metric is the width of the latency histogram. Narrowing your latency band --even if it makes the average case worse-- makes so many systems problems better (top of the list: load balancing) it's not even funny.

jhspaybar 5 hours ago

I can chime in here that I have had similar experiences at another large scale place :). Some requests would take a second or more to complete, with the vast majority finishing in under 100 ms. A solution was put in place that added about 5 ms to the average request, but also crushed the long tail (it just doesn't even exist anymore), and everything is hugely more stable and responsive.

genwin 22 minutes ago

How was 5 ms added? Multiple sleep states per request?

I imagine the long tail disappears in a similar way that a traffic jam is prevented by lowering the speed limit.

harshreality 5 hours ago

I seem to recall Google mentioning on some blog several years ago that high variance in response latency degrades user experience much more than slightly higher average request times. I can't find the link though; if anyone has it, I'd be grateful.

nostrademons 5 hours ago

Jeff Dean wrote a paper on it for CACM:

http://cacm.acm.org/magazines/2013/2/160173-the-tail-at-scal...

There's a relatively easy fix for Heroku. They should do random routing with a backup second request sent if the first request fails to respond after a relatively short period of time (say, the 95th-percentile latency), killing any outstanding requests when the first response comes back in. The amount of bookkeeping required for this is a lot less than full-on intelligent routing, but it can reduce tail latency dramatically since it's very unlikely that the second request will hit the same overloaded server.
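
(my note -- not part of the thread: a minimal Python sketch of this hedged-request idea using concurrent.futures; fetch() and the 95 ms budget are hypothetical:)

    import concurrent.futures, random

    def hedged_request(servers, request, fetch, hedge_after=0.095):
        # pick two distinct servers at random; the second is the backup
        primary, backup = random.sample(servers, 2)
        with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(fetch, primary, request)]
            done, _ = concurrent.futures.wait(futures, timeout=hedge_after)
            if not done:
                # primary blew the ~95th-percentile budget; hedge with a
                # second request to a different, probably-unloaded server
                futures.append(pool.submit(fetch, backup, request))
            done, pending = concurrent.futures.wait(
                futures, return_when=concurrent.futures.FIRST_COMPLETED)
            for f in pending:
                f.cancel()   # best-effort kill of the losing request
            return next(iter(done)).result()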

badgar 4 hours ago

> There's a relatively easy fix for Heroku. They should do random routing with a backup second request sent if the first request fails to respond after a relatively short period of time (say, the 95th-percentile latency), killing any outstanding requests when the first response comes back in. The amount of bookkeeping required for this is a lot less than full-on intelligent routing, but it can reduce tail latency dramatically since it's very unlikely that the second request will hit the same overloaded server.

Your solution doesn't work if requests aren't idempotent.

gojomo 4 hours ago

A relatively easy fix, for read-only or idempotent requests. Also, if long-tail latency requests wind up being run twice, this technique might accelerate tip-over saturation. Still, this 'hedged request' idea is good to keep in mind, thanks for the pointer.

The 'tied request' idea from the Dean paper is neat, too, and Heroku could possibly implement that, and give dyno request-handlers the ability to check, "did I win the race to handle this, or can this request be dropped?"

fizx 1 hour ago

Right now, Heroku has one inbound load balancer that's out of their control (probably ELB(s)). This load balancer hits another layer of mesh routers that Heroku does control, and that perform all of Heroku's magic. In order for "intelligent routing" -- more commonly known as "least-conn" routing -- to work amongst the mesh layer, all of the mesh routers would have to share state with each other in real time, which makes this a hard problem.

Alternatively, Heroku can introduce a third layer between the mesh routers and the inbound random load balancer. This layer consistently hashes (http://en.wikipedia.org/wiki/Consistent_hashing) the api-key/primary key of your app, and sends you to a single mesh router for all of your requests. Mesh routers are (or should be) blazing fast relative to Rails dynos, so this isn't really a bottleneck for your app. Since the one mesh router can maintain connection state for your app, Heroku can implement a least-conn strategy. If the mesh router dies, another router can be chosen automatically.
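
(my note: a rough Python sketch of that consistent-hashing layer -- the router names and replica count are made up; each app's api-key maps to one mesh router, which can then do least-conn locally:)

    import bisect, hashlib

    class ConsistentHashRing:
        def __init__(self, routers, replicas=100):
            # place each router at many points on the ring to even out load
            self.ring = sorted((self._hash('%s:%d' % (r, i)), r)
                               for r in routers for i in range(replicas))
            self.keys = [h for h, _ in self.ring]

        def _hash(self, s):
            return int(hashlib.md5(s.encode()).hexdigest(), 16)

        def router_for(self, api_key):
            # first ring position clockwise from the key's hash
            i = bisect.bisect(self.keys, self._hash(api_key)) % len(self.ring)
            return self.ring[i][1]

    ring = ConsistentHashRing(['mesh1', 'mesh2', 'mesh3'])
    # all requests for one app consistently hit the same mesh router
    assert ring.router_for('app-42') == ring.router_for('app-42')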

gleb 4 hours ago

From experience, this is an incredibly effective way to DoS yourself. It was the default behaviour of the nginx LB ages ago. Maybe only on EngineYard. Doesn't really matter, as nobody uses the nginx LB anymore.

Even ignoring the POST requests problem (yup, it tried to replay those), properly cancelling a request on all levels of a multi-level Rails stack is very hard/not possible in practice. So you end up DoSing the hard-to-scale lower levels of the stack (e.g. the database) at the expense of the easy-to-scale LB.

javajosh 4 hours ago

This is something I've read in networked game literature: players react far better to consistent and high latency than to inconsistent and low latency, even if the averages are lower in the latter case. (It might even have been a John Carmack article).

mvgoogler 5 hours ago

Sounds like Jeff Dean :-)

http://cacm.acm.org/magazines/2013/2/160173-the-tail-at-scal...

bmohlenhoff 5 hours ago

Correct, this matches my observations as well. I'd trade an increase in mean latency for a decrease in worst-case latency anytime. It makes it so much easier to reason about how many resources are needed for a given workload when your latency is bounded.

cmccabe 3 hours ago

> Narrowing your latency band --even if it makes the average case worse-- makes so many systems problems better (top of the list: load balancing) it's not even funny.

Yeah, it's a lot more practical than implementing QoS, isn't it?

dbpatterson 5 hours ago

That's probably true, but the value that Heroku is selling (and they charge a lot for it!) is that you _don't_ need to deal with this - that they will balance that out for you.

cmccabe 3 hours ago

> The solution here is to figure out why your 99th is 3 seconds. Once you solve that, randomized routing won't hurt you anymore. You hit this exact same problem in a non-preemptive multi-tasking system (like gevent or golang).

The Golang runtime uses non-blocking I/O to get around this problem. "

---

" (incidentally, javascript's security model is so rubbish that it is effectively impossible to prevent code-poisoning from stealing keys, even with a signed extension).

A signed extension doesn't really protect you with JavaScript. The open security model of the DOM means that you don't have to change the code in question; you can simply run some additional code that installs an event handler in an appropriate place to grab keys as they pass. "

---

some example stuff that people complained about in another language:

http://web.archive.org/web/20060720091555/http://www.cabochon.com/~stevey/sokoban/docs/article-groovy.html

---

ideas from 'Release It!':

SOA integration points should always have load balancing, circuit breakers (remember that this server is down/overloaded and stop trying to call it), and timeouts, and maybe handshakes (asking 'can you handle another connection?' before making the connection)
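
(a minimal Python sketch of the circuit-breaker idea -- my own illustration, not from the book; the thresholds are arbitrary:)

    import time

    class CircuitBreaker:
        def __init__(self, max_failures=5, reset_after=30.0):
            self.failures = 0
            self.opened_at = None            # None means the circuit is closed
            self.max_failures = max_failures
            self.reset_after = reset_after

        def call(self, fn, *args):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after:
                    # remember the server is down; don't even try to call it
                    raise RuntimeError('circuit open')
                self.opened_at = None        # half-open: let one trial through
            try:
                result = fn(*args)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()   # trip the breaker
                raise
            self.failures = 0
            return result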

beware of blocking; if you have blocking, always have a timeout!


a subclass should not be able to declare a method synchronized (a blocking critical section) if it is implementing that method from an interface or a superclass and the method is not synchronized in the interface -- this violates Liskov substitution

other properties to think about for Liskov: side-effect-free-ness; only-throws-these-exceptions-ness

watch out for blocking calls within critical sections

---

timeouts as security capability / a channel like stdin

---

from my Release It! notes:

131: asynchronous connections between systems are more stable (but substantially more complicated) than synchronous ones; e.g. pub/sub message passing is more stable than synchronous request/response protocols like HTTP. a continuum:

- in-process method calls (a function call into a library; same time, same host, same process)
- IPC (e.g. shared memory, pipes, semaphores, events; same host, same time, different process)
- remote procedure calls (XML-RPC, HTTP; same time, different host, different process) -- my note: this continuum neglects the RPC/REST distinction (one part of which is the statelessness of HTTP)
- messaging middleware (MQ, pub-sub, smtp, sms; different time, different host, different process)
- tuple spaces

---

finally a clear explanation of tuple spaces:

http://software-carpentry.org/blog/2011/03/tuple-spaces-or-good-ideas-dont-always-win.html

they sound great. let's do them.

here's something i wrote about them for the c2.com wiki:

The structure has six primitive operations: put, copy, take, try_copy, try_take, and eval. put places a tuple into the bag. copy finds and reads a tuple. take is like copy but also removes the tuple after reading it. copy and take block if they cannot find a tuple matching the query; try_copy and try_take are the non-blocking versions. eval forks a new process.
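
(my note: a toy, single-process Python sketch of those primitives -- a real tuple space is a distributed structure; here eval forks a thread and None is the wildcard:)

    import threading

    class TupleSpace:
        def __init__(self):
            self.bag = []
            self.cond = threading.Condition()

        def put(self, tup):
            with self.cond:
                self.bag.append(tup)
                self.cond.notify_all()

        def _match(self, pattern):
            # None in the pattern is a wildcard field
            for t in self.bag:
                if len(t) == len(pattern) and all(
                        p is None or p == v for p, v in zip(pattern, t)):
                    return t
            return None

        def take(self, pattern):        # blocks; copy is the same minus remove
            with self.cond:
                t = self._match(pattern)
                while t is None:
                    self.cond.wait()
                    t = self._match(pattern)
                self.bag.remove(t)
                return t

        def try_take(self, pattern):    # non-blocking version
            with self.cond:
                t = self._match(pattern)
                if t is not None:
                    self.bag.remove(t)
                return t

        def eval(self, fn, *args):      # fork a new 'process'
            threading.Thread(target=lambda: self.put(fn(*args))).start()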

---

on the note of tuplespaces vs. message passing:

if we have tuplespaces, we may as well add transactions to get STM (is this what STM is, or is it different?). if we have channels, they should be unbuffered (sync) or buffered (async), like in Go, and have timeouts, and a multicast option -- maybe they should just be something like zeromq.

how do go channels compare to zeromq? https://www.google.com/search?client=ubuntu&channel=fs&q=go+channels+zeromq&ie=utf-8&oe=utf-8

"

    Fast asynchronous message passing.

This is what ZeroMQ gives you. But it gives you it in a form different to that of Erlang: in Erlang, processes are values and message-passing channels are anonymous; in ZeroMQ, channels are values and processes are anonymous. ZeroMQ is more like Go than Erlang. If you want the actor model (that Erlang is based on), you have to encode it in your language of choice, yourself. "

http://www.rabbitmq.com/blog/2011/06/30/zeromq-erlang/

hmm.. seems like we need to support both named processes and named channels, and convenience omissions of either channel or process... hmm..

how does STM compare to tuplespaces?

https://www.google.com/search?client=ubuntu&channel=fs&q=%22software+transactional+memory%22+tuplespaces&ie=utf-8&oe=utf-8

---

the ingredients of Erlang:

"
    Fast process creation/destruction.

    Ability to support » 10 000 concurrent processes with largely unchanged characteristics.

A programming model where processes are lightweight values -- and a good scheduler -- make concurrent programming much easier, in a similar way to garbage collection. It frees you from resource micro-management so you can spend more time reasoning about other things.

    Fast asynchronous message passing.

This is what ZeroMQ gives you. But it gives you it in a form different to that of Erlang: in Erlang, processes are values and message-passing channels are anonymous; in ZeroMQ, channels are values and processes are anonymous. ZeroMQ is more like Go than Erlang. If you want the actor model (that Erlang is based on), you have to encode it in your language of choice, yourself.

    Copying message-passing semantics (share-nothing concurrency).

Notably, Erlang enforces this. In other languages, shared memory and the trap of using it (usually unwittingly) doesn't go away.

    Process monitoring.

Erlang comes with a substantial library, battle-tested over decades, for building highly concurrent, distributed, and fault-tolerant systems. Crucial to this is process monitoring -- notification of process termination. This allows sophisticated process management strategies; in particular, using supervisor hierarchies to firewall core parts of the system from more failure-prone parts of the system.

    Selective message reception.

" http://www.rabbitmq.com/blog/2011/06/30/zeromq-erlang/ (from http://www.zeromq.org/whitepapers:multithreading-magic )

" As far as selective message reception goes, the dual of (Erlang) "from a pool of messages, specify the types to receive" is (Go/ZMQ) "from a pool of channels, specify the ones to select". One message type per channel. "

---

mb or mb not look into http://dl.acm.org/citation.cfm?id=1774536


Ruby refinements:

http://www.rubyinside.com/ruby-refinements-an-overview-of-a-new-proposed-ruby-feature-3978.html

" In a nutshell, refinements clear up the messiness of Ruby's monkey patching abilities by letting you override methods within a specific context only. Magnus Holm has done a great write up of the concepts involved, and presents a good example:

    module TimeExtensions
      refine Fixnum do
        def minutes; self * 60; end
      end
    end

    class MyApp
      using TimeExtensions

      def initialize
        p 2.minutes
      end
    end

    MyApp.new    # => 120
    p 2.minutes  # => NoMethodError
"

refinements are not a property of the value, they are lexically scoped. So you can't easily mimic this with prototype inheritance.

---

if lst is a list value, how do we distinguish between things like lst[3] and lst.extend if lst[3] is just lst.__get__(3) and lst.extend is just lst.__get__(:extend)? e.g. we don't want 'keys lst' to include ':extend'. Obviously, perspectives.

But this means that we can syntactically distinguish indexing into collections from attribute reference, by allowing a syntax like lst[3] to be syntactic sugar for a choice of perspective combined with a __get__.

in addition, this may allow us to do refinements.

---

hmm, actually i think prototype inheritance can help -- the key is that instead of the instance containing a reference to a specific prototype, it contains the name of the type, which is resolved to a prototype based on the lexically scoped type:prototype mapping. hmm, maybe this isn't different from classes/instances though, when you put it that way.

anyhow what does this type do? it pre-populates the attributes of the value in an overridable fashion (you might call it a template, although that's different from the C++ usage of that word).


however, i would prefer dynamic scoping to lexical scoping for refinements. i think lexically scoped refinements are merely cosmetic, in that if you refine String::capitalize, then instead of doing that you could just replace every instance of

x.capitalize

in your code with

if x isa String: x.mycapitalize else: x.capitalize

dynamically scoped refinements would be like semiglobals; their impact disappears when you ascend up the call stack past the guy who put the refinement in place
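
(a rough Python mock-up of what dynamically scoped refinements might feel like -- all names here are made up; a thread-local stack of type->method overrides plays the role of the semiglobal:)

    import threading, contextlib

    _refinements = threading.local()

    @contextlib.contextmanager
    def refining(typ, **methods):
        # push overrides; they vanish when we ascend past this frame
        stack = getattr(_refinements, 'stack', [])
        _refinements.stack = stack + [(typ, methods)]
        try:
            yield
        finally:
            _refinements.stack = stack

    def send(obj, name, *args):
        # dispatch that consults the dynamic refinement stack first
        for typ, methods in reversed(getattr(_refinements, 'stack', [])):
            if isinstance(obj, typ) and name in methods:
                return methods[name](obj, *args)
        return getattr(obj, name)(*args)

    with refining(str, capitalize=lambda s: s.upper()):
        print(send('hi', 'capitalize'))   # 'HI' -- refinement in effect
    print(send('hi', 'capitalize'))       # 'Hi' -- gone outside the block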


" Simple Generators v. Lazy Evaluation

Oleg Kiselyov, Simon Peyton-Jones and Amr Sabry: Simple Generators:

    Incremental stream processing, pervasive in practice, makes the best case for lazy evaluation. Lazy evaluation promotes modularity, letting us glue together separately developed stream producers, consumers and transformers. Lazy list processing has become a cardinal feature of Haskell. It also brings the worst in lazy evaluation: its incompatibility with effects and unpredictable and often extraordinary use of memory. Much of the Haskell programming lore are the ways to get around lazy evaluation.
    We propose a programming style for incremental stream processing based on typed simple generators. It promotes modularity and decoupling of producers and consumers just like lazy evaluation. Simple generators, however, expose the implicit suspension and resumption inherent in lazy evaluation as computational effects, and hence are robust in the presence of other effects. Simple generators let us accurately reason about memory consumption and latency. The remarkable implementation simplicity and efficiency of simple generators strongly motivates investigating and pushing the limits of their expressiveness.
    To substantiate our claims we give a new solution to the notorious pretty-printing problem. Like earlier solutions, it is linear, backtracking-free and with bounded latency. It is also modular, structured as a cascade of separately developed stream transducers, which makes it simpler to write, test and to precisely analyze latency, time and space consumption. It is compatible with effects including IO, letting us read the source document from a file, and format it as we read. 

This is fascinating work that shows how to gain the benefits of lazy evaluation - decoupling of producers, transformers, and consumers of data, and producing only as much data as needed - in a strict, effectful setting that works well with resources that need to be disposed of once computation is done, e.g. file handles.

The basic idea is that of Common Lisp signal handling: use a hierarchical, dynamically-scoped chain of handler procedures, which get called - on the stack, without unwinding it - to parameterize code. In this case, the producer code (which e.g. reads a file character by character) is the parameterized code: every time data (a character) is produced, it calls the dynamically innermost handler procedure with the data (it yields the data to the handler). This handler is the data consumer (it could e.g. print the received character to the console). Through dynamic scoping, each handler may also have a super-handler, to which it may yield data. In this way, data flows containing multiple transformers can be composed.
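
(my note: a tiny Python rendering of that handler-chain idea -- the paper's code is OCaml; yield_ calls the dynamically innermost handler on the stack, without unwinding, and a transformer forwards to its super-handler:)

    handlers = []                    # dynamically scoped chain of handlers

    def yield_(datum, depth=1):
        handlers[-depth](datum)      # call a handler on the stack, no unwinding

    def with_handler(handler, thunk):
        handlers.append(handler)
        try:
            thunk()
        finally:
            handlers.pop()

    def producer():                  # e.g. read a file character by character
        for ch in 'ab\ncd':
            yield_(ch)

    def upcaser(ch):                 # transformer: yields to its super-handler
        yield_(ch.upper(), depth=2)

    # consumer <- transformer <- producer, composed like a lazy pipeline:
    with_handler(print, lambda: with_handler(upcaser, producer))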

I especially like the OCaml version of the code, which is just a page of code, implementing a dynamically-scoped chain of handlers. After that we can already write map and fold in this framework (fold using a loop and a state cell, notably.) There's more sample code.

This also ties in with mainstream yield. By Manuel J. Simoni at 2013-02-21 13:30

Nice, I was going to post this too.

One thing I wondered about in this paper was the obsession with weight vs expressivity. The restrictions adopted by the "simple generator" strategy make generators less powerful. Is a copying single-shot generator really so much slower that these tradeoffs become attractive? By Andy Wingo at Thu, 2013-02-21 15:19

I also liked the repmin example, and how the solution was to turn the tree into a stream -- the opposite of the SSAX work Kiselyov did 10 years ago. By Andy Wingo at Thu, 2013-02-21 15:21

The repmin solution presented in the paper still does 2 traversals over the stream, which somewhat defeats the purpose IMHO. The attractiveness of the original repmin is that it does two things in one traversal. By ninegua at Sun, 2013-02-24 21:29

One more traversal...

I'm not sure I see the point. If I remember correctly, the lazy repmin builds a thunk structure that exactly mirrors the tree. So yes, you need only one traversal to build it, but then forcing a result out of it is isomorphic to a second traversal.

(It's a bit like when you use continuation-passing style to make any program tail-recursive: yes, it's tail-recursive, but the memory consumption corresponding to stack frames for non-tail calls is now needed to build the chain of continuation closures. It's an interesting transform, but you still have to take into account the memory usage corresponding to continuation closures, just as here you should consider the control pattern corresponding to chained thunk forcing.) By gasche at Sun, 2013-02-24 22:27

Lazy repmin

No, in a lazy language repmin can link nodes to a new box for the min value, which isn't known until the whole tree is traversed. By Bruno Martinez at Sun, 2013-02-24 23:46

I didn't read it in detail, but am I correct in thinking that this cannot express zip? By Jules Jacobs at Thu, 2013-02-21 19:04

Zip

I assume by zip, in this case, you mean:

    (Producer m x, Producer m y) -> Producer m (x,y)

Inability to zip (or split) streams seems to be a common weakness of many of Haskell's pipe/conduit/iteratee/etc. models. You can partially make up for it by clever use of `Either`. By dmbarbour at Thu, 2013-02-21 19:36

Iteratees can do zip, and even more

Simple generators indeed cannot do zip: they can express nested but not parallel loops. Iterators of CLU (which was the first real language with yield) made this limitation clear. I believe the balance of simplicity of implementation vs expressivity is often an illuminating question. For example, we gained a lot of insight investigating a deliberately restricted version of Turing Machines: read-only Turing machines, or Finite State Automata. Primitive recursive functions are another example.

Iteratees and its many variations are in a different league. They do capture and reify a part of their continuation. Therefore, they can do zip. Iteratees can do much more: the ordinary ZipWith means reading two sources in lockstep. Iteratees can read two sources in arbitrary, statically unknown amounts -- for example, merge two sorted sources:

http://okmij.org/ftp/Streams.html#2enum1iter

One can also distribute one source to several sinks, which is simpler. By Oleg at Sat, 2013-02-23 09:13

"

http://okmij.org/ftp/Streams.html

---

http://coffeescript.org/#literate


http://aptiverse.com/blog/css_features/

---

http://lxc.sourceforge.net/

---

Consider bash:

    if [ `which foo` == "/usr/bin/foo" ]; then
      do_stuff
    fi

compared to Ruby:

    if `which foo`.strip == "/usr/bin/foo"
      do_stuff
    end

compared to Python:

    import os
    if os.popen("which foo").read().strip() == "/usr/bin/foo":
      do_stuff()

---

" Having a class model instead of a prototype model also helps—much as I like working with prototypes—because you can easily eliminate late binding (“hash lookups”) and allocations, two of the performance killers mentioned in the slides. The operative word there is easily. You can analyse and JIT all you want, but you cannot feasibly solve at runtime the fundamental constraints imposed by the language. "

" I really like the basic convention in C that you should pass needed allocations in as arguments (and it is virtually always possible) allowing these allocations to be members of of members of other allocations, aggregating allocations so there are much fewer. "

"

pcwalton 1 day ago

Related to this is the importance of deforestation. Some good links:

Deforestation is basically eliminating intermediate data structures, which is similar to what the "int(s.split("-", 1)[1])" versus "atoi(strchr(s, '-') + 1)" slides are about. If you consider strings as just lists of characters, then it's basically a deforestation problem: the goal is to eliminate all the intermediate lists of lists that are constructed. (It's something of a peculiar case though, because in order to transform into the C code you need to not only observe that indexing an rvalue via [1] and throwing the rest away means that the list doesn't have to be constructed at all, but you also need to allow strings to share underlying buffer space—the latter optimization isn't deforestation per se.)
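
(my note: concretely, the Python from the slides builds an intermediate list that a hand-"deforested" version never materializes:)

    s = 'foo-123'
    n = int(s.split('-', 1)[1])      # allocates a 2-element list plus substrings
    n = int(s[s.index('-') + 1:])    # fused by hand: scan for '-', slice once
    # (C's atoi(strchr(s, '-') + 1) goes further by sharing the buffer
    # instead of copying the slice)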

I don't know if there's been much effort into deforestation optimizations for dynamic languages, but perhaps this is an area that compilers and research should be focusing on more.

On another minor note, I do think that the deck is a little too quick to dismiss garbage collection as an irrelevant problem. For most server apps I'm totally willing to believe that GC doesn't matter, but for interactive apps on the client (think touch-sensitive mobile apps and games) where you have to render each frame in under 16 ms, unpredictable latency starts to matter a lot.

cwzwarich 1 day ago

Automatic deforestation can remove intermediate results in a pipeline of computation, but it can not rewrite a program that is based around the querying / updating of a fixed data structure to use an efficient imperative equivalent throughout.

lucian1900 1 day ago

Deforestation is easily done in lazy languages like Haskell.

As for GC, it would be nice to have good real time GCs in runtimes.

dons 1 day ago

It has nothing to do with laziness. It has everything to do with the guaranteed absence of side effects.

Deforestation is /more useful/ in strict languages, because allocation of temporary structures costs more. So fusion on strict arrays is better than on lazy streams.

You just can't do it unless you can freely reorder multiple loops, and to do that you need a proof there are no side effects. Haskell just makes that trivial.

klodolph 1 day ago

> As for GC, it would be nice to have good real time GCs in runtimes.

After decades of GC research, I think the conclusion is, "Yeah, that would be nice." Current state of the art gives us some very nice GCs that penalize either throughput or predictability. One of my favorite stories about GC is here:

http://samsaffron.com/archive/2011/10/28/in-managed-code-we-...

gngeal 11 hours ago

"Deforestation is easily done in lazy languages like Haskell."

You can also do it in stream-based or data-flow-based languages. Or in pretty much any DSL you decide to implement, if the semantics of the language itself is reasonable.

"

---

" irahul 15 hours ago

link

Mike Pall of luajit fame has an interesting take on it.

http://www.reddit.com/r/programming/comments/19gv4c/why_pyth...

<quote>

While I agree with the first part ("excuses"), the "hard" things mentioned in the second part are a) not that hard and b) solved issues (just not in PyPy).

Hash tables: Both v8 and LuaJIT manage to specialize hash table lookups and bring them to similar performance as C structs (1). Interestingly, with very different approaches. So there's little reason NOT to use objects, dictionaries, tables, maps or whatever it's called in your favorite language.

(1) If you really, really care about the last 10% or direct interoperability with C, LuaJIT offers native C structs via its FFI. And PyPy has inherited the FFI design, so they should be able to get the same performance someday. I'm sure v8 has something to offer for that, too.

Allocations: LuaJIT has allocation sinking, which is able to eliminate the mentioned temporary allocations. Incidentally, the link shows how that's done for a x,y,z point class! And it works the same for ALL cases: arrays {1,2,3} (on top of a generic table), hash tables {x=1,y=2,z=3} or FFI C structs.

String handling: Same as above -- a buffer is just a temporary allocation and can be sunk, too. Provided the stores (copies) are eliminated first. The extracted parts can be forwarded to the integer conversion from the original string. Then all copies and references are dead and the allocation itself can be eliminated. LuaJIT will get all of that string handling extravaganza with the v2.1 branch -- parts of the new buffer handling are already in the git repo. I'm sure the v8 guys have something up their sleeves, too.

I/O read buffer: Same reasoning. The read creates a temporary buffer which is lazily interned to a string, ditto for the lstrip. The interning is sunk, the copies are sunk, the buffer is sunk (the innermost buffer is reused). This turns it into something very similar to the C code.

Pre-sizing aggregates: The size info can be backpropagated to the aggregate creation from scalar evolution analysis. SCEV is already in LuaJIT (for ABC elimination). I ditched the experimental backprop algorithm for 2.0, since I had to get the release out. Will be resurrected in 2.1.

Missing APIs: All of the above examples show you don't really need to define new APIs to get the desired performance. Yes, there's a case for when you need low-level data structures -- and that's why higher-level languages should have a good FFI. I don't think you need to burden the language itself with these issues.

Heuristics: Well, that's what those compiler textbooks don't tell you: VMs and compilers are 90% heuristics. Better deal with it rather than fight it.

tl;dr: The reason why X is slow, is because X's implementation is slow, unoptimized or untuned. Language design just influences how hard it is to make up for it. There are no excuses.

</quote>

Also interesting is his research on allocation sinking:

http://wiki.luajit.org/Allocation-Sinking-Optimization

"

" The problem is that we haven't yet implemented those abstraction layers in this smart way - for example, Haskell can implement 'fusion' of multiple string operations so that they are merged together and executed without intermediate copies; and the abstraction layer for that is exactly as high-level as the Python examples in original poster's slides. Sure, it's objectively hard to change core Python like that - but it theoretically can be done, so it should&will be done. "

"

Zak 1 day ago

The creators of Common Lisp knew what Alex is talking about. Lisp is, of course just as dynamic as Ruby, Python or Javascript, but it exposes lower-level details about data structures and memory allocation iff the programmer wants them.

Features that come to mind include preallocated vectors (fixed-size or growable), non-consing versions of the standard list functions and the ability to bang on most any piece of data in place. There are fairly few situations in which a CL program can't come within a factor of 2 or 3 of the performance of C.

"

"

riobard 1 day ago

Completely agree. APIs are so important for many optimizations to pull off.

I'd really like to use a lot more buffer()/memoryview() objects in Python. Unfortunately many APIs (e.g. sockets) won't work well with them (at least in Python 2.x. Not sure about 3.x).

So we ended up with tons of unnecessary allocation and copying all over the place. So sad.

"

https://github.com/edn-format/edn

---

bitcoin script as an example of a subturing language


i was bitten by Python's "resist the temptation to guess" w/r/t using a print command within an exception handler, which caused a unicode error because there was a character in a unicode variable which could not be represented in my current locale

my friend p.r. was bitten by Python's "resist the temptation to guess" when trying to serialize a simple data structure to JSON, because the structure contained a DateTime and JSON formally doesn't have a DateTime type

my friend p.r. feels that Python's trouble with dealing with multiple Python versions on the same machine (and hence, the trouble with getting a bunch of developers onboarded, or especially the trouble when two dev teams have been working with different Python versions, and now their two products need to interact) is even worse than Ruby's and is the worst of any language he uses.


" Just saw this milestone by Run Revolution's LiveCode? - it's the main living branch I'm aware of from the Hypercard/Supercard/Oracle Media Objects tree. Has anyone here used LiveCode?? Are there other intuitive, card-based tools around for fast prototyping? "

http://www.runrev.com/developers/lessons-and-tutorials/tutorials/

---

probs with C:

https://docs.google.com/presentation/d/1h49gY3TSiayLMXYmRMaAEMl05FaJ-Z6jDOWOz3EsqqQ/preview?usp=sharing&sle=true#slide=id.p

---

"

pjscott 3 hours ago

Nice rant, and I mostly agree with it, but there are a few slow things about the ways that some languages are usually implemented that don't really have much to do with over-engineering. Let's take this line of Python code, for example, and look at how the CPython interpreter runs it:

    return x + 42

This turns into the following bytecode instructions:

    LOAD_FAST       0 (x)
    LOAD_CONST      1 (42)
    BINARY_ADD
    RETURN_VALUE

First we get the x variable from the local namespace. This is an array of PyObject pointers; the LOAD_FAST operation simply does an array lookup and puts the result on the stack, incrementing the reference count. Pretty fast. Next is LOAD_CONST, which is even faster; it takes an already-allocated PyObject and puts a pointer to it on the stack, incrementing the reference count. BINARY_ADD removes two numbers from the stack, dereferences the pointers to get the integer values, allocates a new PyObject with the resulting integer inside, bumps up its reference count, decrements the reference counts of the two operands, and pushes the result on the stack. Finally, RETURN_VALUE jumps back to the caller of the function.

In Go, the corresponding code would be compiled to two, maybe three machine code instructions. There are good reasons why Python does it this way, but it does suffer some inherent slowdown.
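
(my note: you can reproduce this disassembly with the standard dis module; the exact opcode names vary across CPython versions -- newer CPythons show BINARY_OP instead of BINARY_ADD:)

    import dis

    def f(x):
        return x + 42

    dis.dis(f)   # prints LOAD_FAST, LOAD_CONST, BINARY_ADD, RETURN_VALUE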

kevingadd 3 hours ago

A more interesting comparison would be the PyPy-generated native code for the expression versus the compiled code from Go. Comparing an interpreter with a compiler is generally uninteresting; the compiler almost always wins.

If you look at the code that comes out of various modern JITs you often will find really interesting differences in the native code they produce, even for simple constructs like arithmetic. Type checks, barriers, bailouts, etc - in addition to more mundane differences, like one using SSE to do floating point and the other using x87, and one JIT having a better register allocator.

ralph 3 hours ago

> dereferences the pointers to get the integer values

Doesn't it find a method of x that implements addition?

pjscott 2 hours ago

Nope! There's a branch in the interpreter that checks if both operands are Python's built-in int objects. If they are, and the result can fit in a C int without overflow, then the interpreter adds the numbers directly. This is by far the most common case. "


unfortunately, pandas cannot be a backend in the Python implementation:

Date: Fri, 15 Mar 2013 19:16:28 -0700
From: y-p
To: pydata/pandas
Cc: Bayle Shanks
Subject: Re: [pandas] DataFrame.copy(), at least, should be threadsafe (#2728)

Right now, pandas is explicitly not thread-safe. Taking any step down this path will inevitably generate lots of pain and changes all over. Python threads see more limited use than in other languages; the upside is correspondingly limited.

You can always implement per-object or a global pandas lock in your own code, if threads are what you want.

Pushing back to 0.12, at least.

--- Reply to this email directly or view it on GitHub: https://github.com/pydata/pandas/issues/2728#issuecomment-14997831


https://www.dropbox.com/s/xknbe58zcvjhzhv/PyCon2013.pptx



i finally found an explanation that makes sense:

https://news.ycombinator.com/item?id=5401324

" fzzzy 1 day ago

link

From a certain perspective it is a rational decision. Because the CPython API relies so heavily on the C stack, either some platform-specific assembly is required to slice up the C stack to implement green threads, or the entire CPython API would have to be redesigned to not keep the Python stack state on the C stack.

Way back in the day [1] the proposal for merging Stackless into mainline Python involved removing Python's stack state from the C stack. However there are complications with calling from C extensions back into Python that ultimately killed this approach.

After this Stackless evolved to be a much less modified fork of the Python codebase with a bit of platform specific assembly that performed "stack slicing". Basically when a coro starts, the contents of the stack pointer register are recorded, and when a coro wishes to switch, the slice of the stack from the recorded stack pointer value to the current stack pointer value is copied off onto the heap. The stack pointer is then adjusted back down to the saved value and another task can run in that same stack space, or a stack slice that was stored on the heap previously can be copied back onto the stack and the stack pointer adjusted so that the task resumes where it left off.

Then around 2005 the Stackless stack slicing assembly was ported into a CPython extension as part of py.lib. This was known as greenlet. Unfortunately all the original codespeak.net py.lib pages are 404 now, but here's a blog post from around that time that talks about it [2].

Finally the relevant parts of greenlet were extracted from py.lib into a standalone greenlet module, and eventlet, gevent, et cetera grew up around this packaging of the Stackless stack slicing code.

So you see, using the Stackless strategy in mainline python would have either required breaking a bunch of existing C extensions and placing limitations on how C extensions could call back into Python, or custom low level stack slicing assembly that has to be maintained for each processor architecture. CPython does not contain any assembly, only portable C, so using greenlet in core would mean that CPython itself would become less portable.

Generators, on the other hand, get around the issue of CPython's dependence on the C stack by unwinding both the C and Python stack on yield. The C and Python stack state is lost, but a program counter state is kept so that the next time the generator is called, execution resumes in the middle of the function instead of the beginning.

There are problems with this approach; the previous stack state is lost, so stack traces have less information in them; the entire call stack must be unwound back up to the main loop instead of a deeply nested call being able to switch without the callers being aware that the switch is happening; and special syntax (yield or yield from) must be explicitly used to call out a switch.

But at least generators don't require breaking changes to the CPython API or non-portable stack slicing assembly. So maybe now you can see why Guido prefers it.

Myself, I decided that the advantages of transparent stack switching and interoperability outweighed the disadvantages of relying on non-portable stack slicing assembly. However Guido just sees things in a different light, and I understand his perspective.

  [1] http://www.python.org/dev/peps/pep-0219/
  [2] http://agiletesting.blogspot.com/2005/07/py-lib-gems-greenlets-and-pyxml.html"

thank you, fzzzy!!!

for me, this strengthens my belief that a parallel handler tree and delimited continuations, replacing a linear stack, should be fundamental to a new programming language.


http://sdiehl.github.com/gevent-tutorial/


the disadvantages of callback-style programming:

http://technicae.cogitat.io/2012/03/conversation-with-guido-about-callbacks.html

(basically, it's harder to read -- many people seem to think it's harder to read; few agree with this blogger, who finds it easier to read)


and the problem with coroutines:

"

bayle: if i am understanding the problem correctly (i may not be), this seems easy enough to solve: provide a special way of calling a function (could be just a function that takes a function to call as a parameter) that says 'don't switch out of the call stack under this context without getting my permission first'. e.g. if f() special-calls g() with f2() as the permission-granter, and g() tries to yield control (a context-switch yield, not a generator yield), then rather than actually yielding, the runtime calls f2(), which might lock some stuff etc., and if and only if f2() returns True do we actually yield (otherwise we just go back into g() immediately).

i guess Python does this in reverse: the default case is not to allow sub-yields, but you can use 'yield from' to explicitly allow it: http://www.python.org/dev/peps/pep-0380/
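
for concreteness, a plain call inside a Python generator cannot yield on its caller's behalf; only 'yield from' explicitly delegates:

    def inner():
        yield 1
        yield 2

    def outer_broken():
        inner()               # just creates and discards a generator
        yield 'done'

    def outer():
        yield from inner()    # explicitly allows the sub-generator to yield

    print(list(outer_broken()))   # ['done']
    print(list(outer()))          # [1, 2]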

" The weakness of callbacks is the lack of multicore support. It is not pure speculation that gevent simply won't work in a preemptive environment if it wants to utilize multiple cores. The sentence right about this makes my point:

> yet due to it being a networking library it is unlikely gevent users will have to use the semaphore and locking primitives provided with gevent.

Why is that? It's because gevent context switches on I/O events, not preemptively, so you have some control over when context switches happen. You can depend on 'my_dict[foo] += 1' to happen atomically. If you want to utilize multiple cores with gevent, as code using it is written now, you can't because all of the code out there depend on operations between I/O events happening atomically. This isn't a secret in the callback/gevent world. On the contrary, it's presented as a selling point. No longer worry about those locking primitives those real threading libraries give you because we are single threaded. (Ocaml/Lwt has the same problem). " -- https://news.ycombinator.com/item?id=3752213


ok, it seems like this is another hierarchy of control vs flexibility. The less flexible, the easier it is to write a program.

The most controlled, least flexible is just to write a single threaded program, with no coroutines or generators or anything, blocking on I/O, etc. The downside is that you waste a lot of time blocking, and you can't respond with low-latency to events.

The most flexible is pre-emptive multitasking with shared memory. The downside is that you must write all of your program to accomodate potential accesses by other threads to your data interleaved with your own accesses.

In between these extremes, we have levels of execution concurrency control:

- preemptive multitasking
- preemptive multitasking with critical sections (sections of code in which you cannot be preempted)
- synchronized methods (only one of which can be executing at a time for a given object, which may have private variables)
- cooperative multitasking instead of preemptive, where all sections of your code are critical except for yield-of-control statements, but subroutines may contain yield-of-control statements that you don't know about
- no coroutines but only Python-style generators, so subroutines do not contain yield-of-control statements that you don't know about
- plain single threading

We also have levels and methods of memory isolation. e.g. STM, processes which don't share memory except for via channels/queues, etc. And, not quite fitting in with either execution control or memory isolation (which is maybe why they are so difficult) are things like locking, which require thinking about the flow of control as it related to the access of specific memory locations.

This sort of hierarchy seems to occur a lot. We also have:

- a type system strictness hierarchy (it's easiest to just write a program that works for things of a certain type, but generic containers are more flexible)
- a hierarchy of how much control vs. speed to have (it's easiest to not worry about boxed vs. unboxed arrays, or about which operations make copies and which pass pointers, but you can write faster code if you're willing to worry about these things)
- a hierarchy of memory management (it's easiest to let the garbage collector take care of it, but it's faster and better for interfacing with outside processes to do it manually)
- a hierarchy of laziness (it's simpler to write strict code (you don't have to worry about foldl vs. foldr) but more flexible to write lazy code)
- a hierarchy of control flow forms (goto and continuations and tail-call optimization are the most general, but it's simpler to make everything a for loop or a while loop)
- a hierarchy of metaprogrammability (the most expressive is that the language can be completely redefined -> syntax cannot be redefined -> reserved words and builtins cannot be redefined -> no macros, which is easiest to read)

languages like Rust let you explictly move between levels of some of these hierarchies (e.g. memory management; http://static.rust-lang.org/doc/tutorial.html#ownership thru http://static.rust-lang.org/doc/tutorial.html#borrowed-pointers ; mb see also http://pcwalton.github.com/blog/2013/03/18/an-overview-of-memory-management-in-rust/). i tend to favor that approach.

when a language does not recognize some of these levels, then you have to worry about things like: is this library threadsafe? do i have to worry about deadlocks if i call this library in some convoluted manner and then another library i call also calls it unexpectedly? is it possible for this library to actually crash (e.g. if you are in Python and the library calls out to C, it could actually crash, rather than just throwing an exception; this is pretty rare; but if you are in C, using a C library, you might have to worry more)? does this library have a memory leak? does this library use non-blocking I/O? will importing * from this library redefine builtins? will calling this library that uses continuations cause a non-local exit that will bypass my finalizers? The best would be a language in which, for any such question that you can think of, you can assume "the good answer" for any library module unless there is some indication otherwise. E.g. by default libraries are threadsafe, can't contribute to deadlocks, use non-blocking I/O, do not redefine builtins, do not bypass finalizers in the caller.

A generic way to indicate the compliance of a module with stuff like this (since more will doubtlessly be invented after the language) should be included in the module-level part of the type system (i still havent gotten around to checking out ML/SML's module type system btw). (todo mb read http://existentialtype.wordpress.com/2011/04/16/modules-matter-most/ )

note that often this sort of thing is the reason people write new programming languages; not because they need a more expressive core semantics or syntax, but because they want to create an ecosystem of libraries within which the users can rely that certain standards of the sort above are met, and furthermore that certain conventions (e.g. "Pythonic") are adhered to.

within languages you have 'framework' ecosystems, which allow the assumption of more standards by modules (just like theorems in, e.g. linear algebra, can assume more premises than more general theories such as modern algebra); e.g. within Django you can assume access to a database via Django's ORM.


took a very brief look at http://existentialtype.wordpress.com/2011/04/16/modules-matter-most/ and i agree:

" More disappointingly, for me at least, is that even relatively enlightened languages, such as Haskell or F#, fail to support this basic machinery. In Haskell you have type classes, which are unaccountably popular (perhaps because it’s the first thing many people learn). There are two fundamental problems with type classes. The first is that they insist that a type can implement a type class in exactly one way. For example, according to the philosophy of type classes, the integers can be ordered in precisely one way (the usual ordering), but obviously there are many orderings (say, by divisibility) of interest. The second is that they confound two separate issues: specifying how a type implements a type class and specifying when such a specification should be used during type inference. As a consequence, using type classes is, in Greg Morrisett’s term, like steering the Queen Mary: you have to get this hulking mass pointed in the right direction so that the inference mechanism resolves things the way you want it to. "


need to have a way to generically define things like Milewski's ownership trees in code

how is this different from a macro-y thing? it's not recursing on the AST of the code, it's recursing through data

how is this different from a normal runtime recursion through data? it's part of typechecking

---

"

vidarh 18 hours ago

link

Topaz won't help for the case you describe.

The big problem with bundle exec (or Ruby startup times in general) is the ludicrous amount of filesystem access caused by searching through a ridiculously large number of files for each "require". E.g. if you have a ton of gems installed, most "require" calls will look for the files you require relative to the root of every gem....

Much more extensive use of require_relative, and fewer search paths can fix that entirely.

Try an "strace 2>&1 -e lstat64,stat64,lstat,stat bundle exec [yourcommand] | less" and be prepared to be shocked at the waste.

(EDIT: This of course assumes you have strace; on most Linux distros that's just a package install away - I don't know about OS X, and I've got no idea how to make dtrace do the same) "

"

Freaky 8 hours ago

link

You're talking about startup time, which is bad in this case because of the overhead Rubygems introduces by adding loads of directories to the require load path. Pre-rubygems, everything lived in a traditional $PREFIX/lib/ruby/{,site-ruby/}$VERSION/ directory just like Perl and Python, and startup times were much more comparable.

Atwood is talking about general runtime performance, and comparing it with the fast, mature JITed VMs that drive CLR languages. Python (and indeed other languages in its class) compare just as badly there. "

" mmahemoff 16 hours ago

link

Jeff is really comparing the wrong things here. Whether Ruby, Python, or even ASP.net, it's all asking for Discourse to be a niche product only used sparingly by enthusiasts and in the enterprise.

Which was maybe the goal, but I don't think so.

The obvious choice IMO was to suck it up and live with PHP. Nowhere near as nice as the other options, but exponentially less friction for people to set up on a LAMP stack. I'm not going to all-out defend PHP, but it's not as bad as it used to be; and you don't have to use it as much as you used to, given that Discourse is heavily a client-side JavaScript? app anyway. "

---

Go 1.1 now implements method values, which are functions that have been bound to a specific receiver value. For instance, given a Writer value w, the expression w.Write, a method value, is a function that will always write to w; it is equivalent to a function literal closing over w:

    func (p []byte) (n int, err error) { return w.Write(p) }

Method values are distinct from method expressions, which generate functions from methods of a given type; the method expression (*bufio.Writer).Write is equivalent to a function with an extra first argument, a receiver of type (*bufio.Writer):

    func (w *bufio.Writer, p []byte) (n int, err error) { return w.Write(p) }

---

namespaces for annotations (like for semweb stuff)

  of course perspectives are like namespaces, so just use perspectives for annotations

---

what does XML have over JSON?

schemas

(also extensible types?)



davidw 11 hours ago

I'm interested in tracking Go as a replacement for Erlang. Some things that should probably happen:

I think they'll get there eventually. Maybe not 100%, but 'good enough' sooner or later.

jerf 7 hours ago

Right now, as near as I can tell, you basically can't implement an Erlang supervision tree in Go. You don't have anything like linking, which really has to be guaranteed by the runtime to work properly at scale, so bodging something together with some 'defer' really doesn't cut it. You also can't talk about "a goroutine", because you can't get a reference or a handle to one (no such thing), you can only get channels, but they aren't guaranteed to map in any particular way to goroutines.

I've done Erlang for years and Go for weeks, so I'm trying to withhold judgement, but I still feel like Erlang got it more right here; it's way easier to restrict yourself to a subset of Erlang that looks like a channel if that is desirable for some reason than to implement process features with channels in Go.

JulianMorrison 8 hours ago

That issue is quite simply a misinterpretation of goroutines.

Erlang: perform a maximum number of reductions, then switch, or switch on IO. The number of reductions is cleverly adjusted so that a process which is swamping other processes will be throttled.

Go: switch on IO.

Go's design is much simpler, and closer to Ruby Fibers than to Erlang processes, except that goroutine scheduling can use multiple threads. To cooperatively switch without doing IO, call runtime.Gosched().

trailfox 10 hours ago

Akka is also a viable Erlang alternative.

waffle_ss 6 hours ago

Looks like a nice library but I don't think it's a serious contender to replace Erlang because the JVM just isn't made for the level of concurrency that Erlang's VM is. Off the top of my head as an Erlang newbie:

[1]: http://doc.akka.io/docs/akka/snapshot/general/jmm.html

---

GhotiFish 6 hours ago

Whenever I look through golang's spec, I always get stuck on the same question.

Why are the methods we define restricted to the types we define? I'm SURE there is a good reason.

Others have said that it's because if you did allow that kind of thing to happen, you might get naming collisions in packages. I don't buy this argument, you could get naming collisions anyway from packages, Go resolves those issues by aliasing. Go also allows for polymorphic behavior by packaging the actual type of the method caller with its actual value, so resolving which method to use isn't any more complicated.

I don't get it, I'm sure there's a good reason! I just hope it's a good enough reason to throw out that kind of freedom.

Jabbles 6 hours ago

You can't add methods to a type in a different package as you might break that package. By locking the method set of types when a package is compiled you provide some guarantee of what it does, and it doesn't need to get recompiled again! This is central to Go's ideals of compiling fast.

Besides, embedding a type is very easy http://golang.org/ref/spec#Struct_types

GhotiFish 6 hours ago

Hmmm. I can see how adding methods to a type in a different package would require that package to be recompiled, but I don't see how I could break that package. Unless there's some reflection magic I'm not considering.

I'm reading through the embedded types now. I am new to golang so this one is lost on me. I thought if you wanted your own methods on a type from another package, you just aliased it with your own type def.

though it looks like there's some kinda prototyping behavior being described here?

    If S contains an anonymous field T, the method sets of S and *S both include
    promoted methods with receiver T. The method set of *S also includes promoted 
    methods with receiver *T.
    
    If S contains an anonymous field *T, the method sets of S and *S both include 
    promoted methods with receiver T or *T.

Jabbles 5 hours ago

For example, if you do a runtime type assertion to see if a variable satisfies an interface:

    v, ok := x.(T)

http://golang.org/ref/spec#Type_assertions

If you "monkey-patch" x to satisfy T in another package, the value of ok may change.

GhotiFish 5 hours ago

Hmm I can kind of see what you mean, but I don't see it as a big of a problem.

If you make package A depend on package B, package B monkey-patches x with method Foo so now x is a Fooer

x now satisfies the Fooer interface in package A, well that seems ok. You imported B after all. In things that don't import B, x doesn't satisfy Fooer. Is this unexpected behavior? If B depends on C, C's x won't satisfy Fooer right?

enneff 4 hours ago

It's because Go was designed for large code bases, and doing what you describe leads to unmaintainable code.

You might be interested in this document: http://talks.golang.org/2012/splash.article

pcwalton 5 hours ago

Because if you had package A that defined type T, package B that defined method Foo on T and, separately, package C that defined method Foo on T, then B and C could not be linked together.

GhotiFish 5 hours ago

I don't see why, personally. B defines Foo on T, so whenever B uses Foo, it should use B's Foo. C defines Foo on T, so whenever C uses Foo, it should use C's Foo. Why is it ambiguous for the compiler which one to use?

pcwalton 5 hours ago

If package D linked against B and C and called Foo(), which Foo() would it call?

GhotiFish 5 hours ago

Good question. If package B defined the function Bar(), and package C defined the function Bar(), then if package D linked to packages B and C, which function should it call when it asks for Bar()?

Naming collisions are a solved problem.

reply

pcwalton 5 hours ago

link

You don't have to import methods in Go; the compiler just finds them. They're in a per-type namespace and therefore would be vulnerable to collisions. Of course, they could change the language to allow/require you to import methods, but that would add some complexity.

On the other hand, you have to import functions, so your example isn't a problem.

reply

mseepgood 4 hours ago

link

In Go each external function call is preceded by its package name to avoid collisions, but you don't have packages within the method namespace of a type.

reply

---

An Erlang process has its own heap, so when it blows up, its state just goes away, leaving the rest of your program's state untouched. With Go, there is no way to recover sanely; even if your goroutines are designed to copy memory, Go itself has a single heap.

Now, this is a very odd design decision for a language that claims to be designed for reliability. Perhaps Go's authors think it's better just for the entire program to die if a single goroutine falls over; well, that's one way, but it's a crude one. Erlang's design is simply better.

I wonder if Go can ever adopt per-goroutine heaps, or whether it's too late at this stage. I was happy to see that Rust has chosen to follow Erlang's design by having per-task heaps, even if all the pointer mechanics (three types of pointers, ownership transfer, reference lifetimes and so forth) result in some fairly intrusive and gnarly syntax.

reply

masklinn 1 day ago

link

> Go allows you to share memory between goroutines (i.e. concurrent code).

Go will share memory by default, and special care must be taken to prevent or avoid it. It's not an allowance.

> In fact, the Go team explicitly tells you not to do that

And yet they have refused to implement a correct model, even though they have no problem imposing their view when it fits them (and having the interpreter get special status in breaking them, see generics).

reply

burntsushi 1 day ago

link

> Go will share memory by default, and special care must be taken to prevent or avoid it.

Not really. If you use channels to communicate between goroutines, then the concurrency model is that of sequential processes, even if channels are implemented using shared memory under the hood.

That is, the default concurrency model promoted by Go is not shared memory, but that of CSP. It's disingenuous to saddle Go with the same kind of concurrency model used in C.

> And yet they have refused to implement a correct model

What is a correct model? Erlang's model isn't correct. It's just more safe.

> (and having the interpreter get special status in breaking them, see generics)

What's your point? Purity for purity's sake?

reply

masklinn 19 hours ago

link

> Not really. If you use channels to communicate between goroutines, then the concurrency model is that of sequential processes

Except since Go has support for neither immutable structures nor unique pointers, the objects passed through the channel can be mutable and keep being used by the sender. Go will not help you avoid this.

> That is, the default concurrency model promoted by Go is not shared memory, but that of CSP. It's disingenuous to saddle Go with the same kind of concurrency model used in C.

It's not: Go passes mutable objects over its channels and all goroutines share memory; you get the exact same model by using queues in C.
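a minimal Python sketch of masklinn's point (queue.Queue standing in for a Go channel): passing a mutable object through a queue doesn't stop the sender from mutating it afterward:

    import queue
    import threading

    q = queue.Queue()
    msg = {"n": 0}              # a mutable "message"

    def sender():
        q.put(msg)              # "send" the object down the channel...
        msg["n"] = 42           # ...then keep mutating it afterward

    t = threading.Thread(target=sender)
    t.start()
    t.join()                    # let the sender finish, for a deterministic demo

    received = q.get()
    print(received["n"])        # 42: the queue passed a reference, not a copy,
                                # so sender and receiver still share state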

--- " I completely agree. As someone who works on a parallel functional language, it's very hard to sell a parallel language that isn't as fast as parallel fortran or hand-tuned C code that uses pthreads and the fastest parallel implementation of BLAS and other libraries. "


generics


destructors


reflection

---

"

---

what you really want in order to control memory use in a lazy language is:

and similarly for memory allocation/garbage collection in general:

---

" Getting the result out of a callback- or event-based function basically means “being in the right place at the right time”. If you bind your event listener after the result event has been fired, or you don’t have code in the right place in a callback, then tough luck, you missed the result. This sort of thing plagues people writing HTTP servers in Node. If you don’t get your control flow right, your program breaks.

Promises, on the other hand, don’t care about time or ordering. You can attach listeners to a promise before or after it is resolved, and you will get the value out of it. Therefore, functions that return promises immediately give you a value to represent the result that you can use as first-class data, and pass to other functions. There is no waiting around for a callback or any possibility of missing an event. As long as you hold a reference to a promise, you can get its value out. "
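the same time-independence is easy to see with Python's concurrent.futures.Future, which behaves like a promise in this one respect (a sketch, not the Node API): a callback attached after resolution still fires.

    from concurrent.futures import Future

    f = Future()

    # listener attached *before* resolution: fires when the result arrives
    f.add_done_callback(lambda fut: print("early listener:", fut.result()))

    f.set_result(42)

    # listener attached *after* resolution: add_done_callback invokes it
    # immediately, so the result cannot be "missed" the way an event can
    f.add_done_callback(lambda fut: print("late listener:", fut.result()))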

---

http://dfellis.github.com/queue-flow/2012/09/22/why-queue-flow/

---

semiglobal mutability

e.g. to avoid having state, functional programming ends up explicitly passing it during 'reduce'ish operations, e.g.:

    from numpy import shape, inf  # assuming numpy arrays here

    def white_borders(flatpaint):
        def whiteIfNeighborsDiffer(state, centerVal, neighborVals, i, j, flatpaint):
            differentNeighborVals = set(filter(lambda x: x != centerVal, neighborVals))
            if len(differentNeighborVals) > 0:
                state[i,j] = inf
            return state

        return walk_borders(flatpaint, whiteIfNeighborsDiffer, flatpaint.copy())

    def walk_borders(flatpaint, f, initState):
        state = initState

        for i in range(shape(flatpaint)[0]):
            for j in range(shape(flatpaint)[1]):
                centerVal = flatpaint[i,j]
                neighborLocs = filter(lambda x: x[0] >= 0 and x[1] >= 0 and x[0] < shape(flatpaint)[0] and x[1] < shape(flatpaint)[1], [(i-1,j-1), (i-1,j), (i-1,j+1), (i,j-1), (i,j+1), (i+1,j-1), (i+1,j), (i+1,j+1), ])
                neighborVals = [flatpaint[neighborLoc] for neighborLoc in neighborLocs]
                state = f(state, centerVal, neighborVals, i, j, flatpaint)
        return state

the mutable way of doing this would be to just let whiteIfNeighborsDiffer mutate the state, without explicitly returning it. By itself, this isn't dangerous, since the only one to see that mutation is the next call of whiteIfNeighborsDiffer, and calls to that occur in a single, serialized, linear line, so it's easy to think about. The danger is that if someone outside of walk_borders gains a pointer to the state, then you have all the problems of shared state.

so, it would be better to have a way to say, underneath walk_borders, you have access to this shared state, but not outside of it

i guess this is what monads do, but it would be better to have a more modular, commutative, simpler way to do it
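a sketch of the 'mutable inside, sealed outside' compromise in plain Python (walk_borders_mut and mark_nonzero are made-up names): the callback mutates the state in place, but the function keeps the only live reference until it returns:

    import copy

    def walk_borders_mut(grid_items, f, init_state):
        # private copy: nobody outside can hold a pointer to the in-progress state
        state = copy.deepcopy(init_state)
        for key, center_val in grid_items:
            f(state, key, center_val)   # f mutates state in place; calls are serialized
        return state                    # the only reference that ever escapes

    def mark_nonzero(state, key, val):
        if val != 0:
            state[key] = float('inf')

    print(walk_borders_mut([(0, 1), (1, 0), (2, 5)], mark_nonzero, {}))
    # {0: inf, 2: inf}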


need a more concise way to do something like:

    try:
        regions.remove(0)
    except KeyError:
        pass

mb just "regions.remove(0) except KeyError? {}"

---

http://jmoiron.net/blog/whats-going-on/


verilog:

" The <= operator is used for non-blocking assignments. All that means is all the values assigned with <= get assigned immediately in parallel.

When you use just =, which is used for blocking assignments, that line must finish before the next one can happen. "


mb just use "import libname 3.1" or somesuch to force importing to say which version of the library is being used? or at least automatically make a manifest file with the current versions

also, make something that works like python's virtualenv, as described here: http://dabapps.com/blog/introduction-to-pip-and-virtualenv-python/ except that it should probably 'auto-activate' based on either the cwd or the script's working directory, but with some sort of kludge so that it only checks once for an 'env' directory, rather than checking for a cwd-based path for each separate import, which apparently is responsible for much of ruby's slowness

  (also mb take a look at python 3.3's venv?)

apparently there's this too:

http://virtualenvwrapper.readthedocs.org/en/latest/

" Man, global system-wide installations that require admin rights by default? That's certainly something! Quite the stark comparison to Node.js and npm, where everything is installed locally into the current directory (under node_modules) by default, and "global" installation is actually a per-user installation. Tricking pip with virtualenv seems to get you pretty close to what you get by default with npm, albeit still somewhat more clunky. But to be fair, most other package managing solutions seem to pale in comparison to npm :-)"

"

phren0logy 2 days ago

link

Nice article, but after using leiningen (the clojure solution to a similar problem, based on maven), it's really hard to go back to something like this. I really, really wish there was an equivalent in python (really, every language I use). "

http://jacobian.org/writing/django-apps-with-buildout/

"

arnarbi 2 days ago

link

I find it best to keep virtual envs completely away from the project (I use http://virtualenvwrapper.readthedocs.org/en/latest/ which puts them by default in ~/.virtualenvs). A virtualenv is completely machine-specific.

If your project is a package itself (i.e. it has a setup.py file), then use that file to specify dependencies. On a new machine I check out a copy, create a virtual env and activate it. Then in the local copy I run "pip install -e .". This installs all the requirements from setup.py in the virtualenv, and links the local copy of my project to it as well. Now your package is available in the virtual env, but fully editable.

If your python project is not a package, you can install its dependencies in a virtual env with pip. Then run "pip freeze" to generate a list of all installed packages. Save that to a text file in your repository, e.g. ``requirements.txt``. On a different machine, or a fresh venv, you can then do "pip install -r requirements.txt" to set everything up in one go.

reply "

"

"pip is vastly superior toeasy_install for lots of reasons, and so should generally be used instead."

Unless you are using Windows, as pip doesn't support binary packages. "

http://perlbuzz.com/2013/04/ack-20-has-been-released.html


http://blog.clifreeder.com/blog/2013/04/21/ruby-is-too-slow-for-programming-competitions/


" This is why I am annoyed by the trend of interpreted languages. You don't really need an interpreter to have safety, easy memory management, resizable data structures and a convenient syntax. Compiled languages don't need to be cumbersome like C, Pascal or Fortran.

Fortunately, Go and Rust are bringing some fresh ideas to the compiled language camp. Maybe D too, but it has been too long in a hobby-ish state...

"


singular 982 days ago

link

I had horrific experiences trying to get Racket namespaces to work correctly (in Racket, a namespace is a value rather than simply a name). I found I couldn't get files to use the same namespace no matter how hard I tried, and without a specific namespace some code I was eval-ing simply refused to work.

I suspect Clojure lacks this 'feature' :-)

It's frustrating because I'm sure Racket has a lot to offer but not being able to do something really simple (even after hours and hours of trying) completely put me off.


i guess if everything is a dict, then what we really want is just that anytime there is a dict lookup where the key is known at compile time (e.g. a method call), it is a compiler error if the compiler cannot prove that that key will be in the dict at that time

note that one easy way to deal with this error is to handle it yourself, eg.

  if key in d:
    x = d[key]
  else:
    raise Exception

in this case, the condition on the if guarantees that if we reach x = d[key], key is in dict


http://lambda-the-ultimate.org/node/4699

"

Concurrent Revisions

Concurrent Revisions is a Microsoft Research project doing interesting work in making concurrent programming scalable and easier to reason about. These papers have been mentioned a number of times here on LtU, but none of them seem to have been officially posted as stories.

Concurrent Revisions are a distributed version control-like abstraction [1] for concurrently mutable state that requires clients to specify merge functions that make fork-join deterministic, and so make concurrent programs inherently composable. The library provide default merge behaviour for various familiar objects like numbers and lists, and it seems somewhat straightforward to provide a merge function for many other object types.

They've also extended the work to seamlessly integrate incremental and parallel computation [2] in a fairly intuitive fashion, in my opinion.

Their latest work [3] extends these concurrent revisions to distributed scenarios with disconnected operations, which operate much like distributed version control works with source code, with guarantees of eventual consistency.

All in all, a very promising approach, and deserving of wider coverage. "
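a toy Python sketch of the fork/join-with-merge idea (the class and merge signature are made up; the real library is a .NET research project with a much richer model):

    class Revision:
        """A forkable cell; joins are made deterministic by a user-supplied merge."""
        def __init__(self, value, merge):
            self.value = value
            self.merge = merge          # merge(main, joinee, base) -> merged value
            self.base = value           # snapshot taken at fork time

        def fork(self):
            child = Revision(self.value, self.merge)
            child.base = self.value
            return child

        def join(self, child):
            # three-way merge against the fork-point snapshot, so the outcome
            # is deterministic regardless of interleaving
            self.value = self.merge(self.value, child.value, child.base)

    # a "cumulative int": concurrent additions compose
    counter = Revision(0, merge=lambda main, joinee, base: main + (joinee - base))
    child = counter.fork()
    counter.value += 5      # work in the main revision
    child.value += 3        # work in the forked revision
    counter.join(child)
    print(counter.value)    # 8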


http://bost.ocks.org/mike/selection/


should be easy to make a framework that automatically encloses code in a try/catch block that generates debugging output
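e.g. in Python the framework could just be a decorator (a sketch; debug_wrapped is a made-up name):

    import functools
    import traceback

    def debug_wrapped(fn):
        """Wrap fn so any exception is reported with its arguments, then re-raised."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                print(f"error in {fn.__name__}: args={args!r} kwargs={kwargs!r}")
                traceback.print_exc()
                raise
        return wrapper

    @debug_wrapped
    def divide(a, b):
        return a / b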


in Go and Haskell, you can define methods on unboxed objects, e.g. ints, whereas in Python you can only define methods on objects which are essentially boxed via a pointer

in Go, you don't need an 'implements' declaration

---

http://thornydev.blogspot.com/2013/01/go-concurrency-constructs-in-clojure.html

---

" The Uniform Access Principle (UAP) was articulated by Bertrand Meyer in defining the Eiffel programming language. This, from the Wikipedia Article pretty much sums it up: “All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation.”

Or, alternatively, from this article by Meyer: “It doesn’t matter whether that query is an attribute (representing an object field) or a function (representing a computation); this is an internal representation decision, irrelevant to clients accessing objects through calls such as [ATTRIB_ACCESS]. This “Principle of Uniform Access” — it doesn’t matter to clients whether a query is implemented as an attribute or a function” "
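python's property is one familiar realization of the UAP: the client writes c.area the same way whether it's storage or computation (a small sketch):

    class Circle:
        def __init__(self, radius):
            self.radius = radius        # plain stored attribute

        @property
        def area(self):                 # computed, but accessed with the same notation
            return 3.141592653589793 * self.radius ** 2

    c = Circle(2.0)
    print(c.radius)   # storage
    print(c.area)     # computation; identical access syntax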

---

" Lack of Struct Immutability

By “immutability” I mean that there’s no way for a framework to prevent changes to structures that are supposed to be managed by the framework. This is related to Go’s non-conformity to the UAP (above). I don’t believe there’s a way to detect changes either.

This can probably be done by implementing immutable/persistent data structures along the line of what Clojure did, but these will not have literal forms like Arrays and Maps do now in Go. Once again, this is, in part, an issue of non-idiomatic use of Go. "

---

" Lack of Go Routine Locals

There is no way to store Go routine specific data that’s accessible to any function executing in the go routine. This would be analogous to thread locals, but thread locals don’t make a lot of sense, maybe no sense at all, in Go. Yes, this is just an inconvenience but, again, the only ways that I’ve come across that get around it require error-prone, non-idiomatic, boiler-plate code. "

maybe Jasper semiglobals also serve as a replacement for threadlocals in many (but not all) cases?

---

Weak References
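(for reference, the semantics Python's weakref gives, presumably what Jasper would want: the reference does not keep its target alive)

    import weakref

    class Node:
        pass

    n = Node()
    r = weakref.ref(n)
    print(r() is n)   # True: target still alive
    del n             # drop the only strong reference
    print(r())        # None (in CPython, collection is immediate here)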

---

temporarily commenting out code should not cause it not to compile

---

go's delegation to anonymous members is interesting:

http://xampl.com/so/2012/06/27/followup-to-a-rubyist-has-some-difficulties-with-go-limits-to-polymorphism-and-method-dispatch-in-go/

type A struct{}

func (A) Name() { fmt.Println("A") }

func (self A) SomeAMethod() {
    self.Name()
    self.Name()
}

type B struct { A }

func (B) Name() { fmt.Println("B") }

v.SomeAMethod() == v.A.SomeAMethod()

when v.SomeAMethod() is called, it calls v.A.SomeAMethod(), and the receiver (like the 'self', i think) is v.A, not v. So A is delegated to.


i guess the simplest syntax for multiple, optional return args would be something like:

    def f(x): return 1, opt1/5, opt2/7

    a = f(0)
    a == 1
    a, b/opt2 = f(0)
    a == 1 and b == 7

really we want this to mirror the calling syntax, so are there default return args too? how about:

    a, b/opt2/3, c/opt3/10 = f(0)
    a == 1, b == 7, c == 10

a disadvantage of this is that we give up the opportunity to do an easy destructuring bind on a single return argument, e.g. we can't do:

    def g(x): return [1,2,3]

    a,b,c = g(x)

but we can still do this more verbosely:

    [a,b,c] = g(x)


so do we want to use commas for grouping or for multiassignment? or is there an 'assignment context' (either an lvalue or function args)? but isn't 'f x y z' where we would want an assignment context?

could we use , for multiassignment and ,, (and ,,, etc) for grouping?


perl -lane


this should be much shorter in Jasper:

    #!/bin/sh
    if [ $# -ge 1 ]; then
        cat textfile.txt | perl -e "local \$/; <> =~ m/$1(.*)/si; print \$1;" something
    else
        cat textfile.txt
    fi

---

in particular, there should be something shorter than 'local \$/;'

---

perl's <> operator

--

" Clojure favours discrete components that do one particular job. For instance, protocols were introduced to Clojure to provide efficient polymorphism, and they do not attempt to do anything more than this.

Ruby is an object orientated language, and tends to favour grouping together a wide range of functionality into indivisible components. For instance, classes provide polymorphism, inheritance, data hiding, variable scoping and so forth.

The advantage of the Clojure approach is that it tends to be more flexible. For instance, in Sinatra you can write:

  get "/:name" do |name|
    "Hello #{name}"
  end

And in Compojure, you can write:

  (GET "/:name" [name]
    (str "Hello " name))

Superficially they look very similar, but their implementation is very different. The Sinatra code adds the enclosed block to a hash map in the current class, whilst the Compojure code just returns an anonymous function. The Clojure approach sacrifices some convenience for greater flexibility. For instance, in Compojure I can write:

  (context "/:name" [name]
    (GET "/email" []
      (str name "@example.com"))
    (GET "/greet" []
      (str "Hello " name)))

Because each route is discrete and independent of any larger construct, I can easily take routes and use other functions and macros to group them together.

I may be wrong, but I don't think there's an easy way of doing this in Sinatra, because routes are bound to a class when they are created. "

---

chromatic 747 days ago

link

> Python code is inherently more maintainable.

Even Python 2.7 versus 3.x belies that.

Does every Python programmer use the same sorting mechanism? Object decomposition technique? File layout? Naming conventions? Web framework? Database? Testing strategy? Documentation format? Installation mechanism? Editor?

--

"Out of nowhere"? One of the perennial topics of the mid to late 90s was the idea that Perl needed major fixes to remove common pitfalls (e.g. the object model, syntax warts (nested structures are just horrid), etc) and be more suitable for large projects.

--

ericlavigne 1563 days ago

link

Compojure was created by James Reeves, who said:

"The language I work with from day to day is Ruby, so Compojure has a lot in common with lightweight Ruby frameworks like Sinatra, and less with existing Java frameworks. It's designed for impatient, lazy people like myself, so it's quick to install and quick to start developing in."

http://groups.google.com/group/clojure/browse_thread/thread/...

tlrobinson 1563 days ago

link

I like the DSL-ish style of routing (http method, url pattern, handler method/block). The same thing is used in Sinatra (http://sinatra.github.com/):

    get '/' do
        'Hello world!'
    end

a counterpoint to my plans for Jasper:

" You can replace all letters in a Java program with “x” and you’ll still be able to instantly guess the basic structure:

xxxxxx xxxxxxx xxxXxxxxx(xxx xxxXxxx) {
    xx (xxxXxxxx == xxxx) {
        xxxxxx xxxxx;
    }

    xxx (xxx x = x; x < xxxx.xxxxx; x++) {
        xxx[x] = xxxXxxx(x);
    }
    xxxxxx xxxx;
}

" -- http://www.teamten.com/lawrence/writings/the_language_squint_test.html

lkesteloot 748 days ago

link

I've always suspected that Lisp is write-only. For example, it fails the squint test, which I wrote about here: http://www.teamten.com/lawrence/writings/the_language_squint ... ... I might describe macros as "anti-social" in that they help the programmer but hurt the team. This might be what you meant.

---

http://img264.imageshack.us/img264/1397/lispnd7.png

---

guelo 748 days ago

link

Perl5 has had many OO libs but that has not helped. It is when the community standardizes on best practices and standard libs, such as DBI, that you get big advancements. Recent community efforts around "Modern Perl" and the Strawberry Perl distro are helping the language stay strong.

Another example is JavaScript's recent big leaps forward stemming from the community's embrace of jQuery and the best practices from "JavaScript: The Good Parts".

--

---

ChuckMcM 748 days ago

link

An alternative explanation is that Lisp doesn't have a person with a strong vision, program management skills, a thick skin, a diplomatic way to saying no, and a big ass repository.

The same argument is repeated throughout the essay, "Its not that Lisp doesn't have X, it has {X0,X1,..Xn)!" and that is the problem. Using lisp I get no or very low re-use for any code out there. Ergo I must write a lot of it myself. He recognizes it and laments:

"Look more closely: Large numbers of the kind of people who become Lisp hackers would have to cooperate with each other."

At the solution. But here is the rub, the qualities of the lisp user today are, in part, derived from how difficult it is to re-use code. People who could write anything they wanted enjoy the expressive power of lisp and tolerate the requirement to create from whole cloth basic things that should exist already.

Now posit a world where there was the Lisp equivalent of CPAN, and an aggressive authoritarian requirement on documentation, coding standards, and unit testing. Once sufficient mass had built up in this repository, even dweebs who don't know the difference between arrows and monads could use Lisp to make something useful, and they would take on the false mantle of 'lisp hacker' and the population of lisp users would swell and there would be still more things checked in and poof a thriving 'lisp' community, except it wouldn't look at all like the current community.


jleader 748 days ago

link

You make it sound like CPAN has "an aggressive authoritarian requirement on documentation, coding standards, and unit testing" (or at least that Lisp would need that to make a hypothetical LPAN successful).

CPAN definitely does not have that. What it does have are fairly strong social pressures in favor of those things, and extensive frameworks making it easier to provide those things (I'm thinking in particular of cpantesters.org providing free cross-platform unit testing support for all of CPAN).


gphil 748 days ago

link

This idea kind of reminds me of the notion that ideas are cheap, and that execution and hard work are what it really takes to create a valuable company.

It seems to me that the hard part of developing language features is developing the documentation and the tools, and it's this work that will drive language popularity much more than the features themselves.


typechecked multistage programming like in MetaOCaml:

" MetaOCaml? is a multi-stage extension of the OCaml programming language, and provides three basic constructs called Brackets, Escape, and Run for building, combining, and executing future-stage computations, respectively. (Please read README-META file in distribution for MetaOCaml?'s syntax for these constructs). "

" 1.2 The Three Basic MSP Constructs We can illustrate how MSP addresses the above problems using MetaOCaml? [2], an MSP extension of OCaml [9]. In addition to providing traditional im- perative, object-oriented, and functional constructs, MetaOCaml? provides three constructs for staging. The constructs are called Brackets, Escape, and Run. Using these constructs, the programmer can change the order of evaluation of terms. This capability can be used to reduce the overall cost of a computation. Brackets (written .< ... >. ) can be inserted around any expression to delay its execution. MetaOCaml? implements delayed expressions by dynamically gener- ating source code at runtime. While using the source code representation is not the only way of implementing MSP languages, it is the simplest. The following short interactive MetaOCaml? session illustrates the behavior of Brackets 1 :

    # let a = 1+2;;
    val a : int = 3
    # let a = .<1+2>.;;
    val a : int code = .<1+2>.

Lines that start with # are what is entered by the user, and the following line(s) are what is printed back by the system. Without the Brackets around 1+2, the addition is performed right away. With the Brackets, the result is a piece of code representing the program 1+2. This code fragment can either be used as part of another, bigger program, or it can be compiled and executed. (1: Some versions of MetaOCaml developed after December 2003 support environment classifiers [21]. For these systems, the type int code is printed as ('a,int) code. To follow the examples in this tutorial, the extra parameter 'a can be ignored.)

In addition to delaying the computation, Brackets are also reflected in the type. The type in the last declaration is int code. The type of a code fragment reflects the type of the value that such code should produce when it is executed. Statically determining the type of the generated code allows us to avoid writing generators that produce code that cannot be typed. The code type constructor distinguishes delayed values from other values and prevents the user from accidentally attempting unsafe operations (such as 1 + .<5>.).

Escape (written .~ ...) allows the combination of smaller delayed values to construct larger ones. This combination is achieved by "splicing-in" the argument of the Escape in the context of the surrounding Brackets:

    # let b = .<.~a * .~a >. ;;
    val b : int code = .<(1 + 2) * (1 + 2)>.

This declaration binds b to a new delayed computation (1+2)*(1+2).

Run (written .! ...) allows us to compile and execute the dynamically generated code without going outside the language:

    # let c = .! b;;
    val c : int = 9

Having these three constructs as part of the programming language makes it possible to use runtime code generation and compilation as part of any library subroutine. In addition to not having to worry about generating temporary files, static type systems for MSP languages can assure us that no runtime errors will occur in these subroutines (c.f. [17]). Not only can these type systems exclude generation-time errors, but they can also ensure that generated programs are both syntactically well-formed and well-typed. Thus, the ability to statically type-check the safety of a computation is not lost by staging. "

---

jhuni 748 days ago

link

The beauty of Lisp is in its simplicity. Lisp has the simplest syntax in the world, and the entire language is based upon five simple primitives.

All other mainstream languages complicate matters with operator precedence tables, multiple operator notations (prefix, infix, postfix, postcircumfix, etc), and many other syntactic weirdities.

---

" I remember writing a java library to do SNOBOL/Icon-style pattern matching in college because I hated the java way so much... every once in a while, I think about doing it again. "

---

technomancy 748 days ago

link

> If you are right, Clojure should end up "more powerful" than Scala but having (relative to Scala) this "problems which are technical issues in Scala are social issues in Clojure" dynamic.

Part of that is because Clojure isn't starting from scratch when it comes to conventions; there are strong admonitions as far as avoiding side-effects, favoring composable functions over macros, and re-using functionality from the JVM where appropriate.

---

http://groups.google.com/group/clojure/browse_thread/thread/3a76a052b419d4d1/d57ae6ad6efb0d4e?#d57ae6ad6efb0d4e

http://groups.google.com/group/clojure/browse_thread/thread/8b2c8dc96b39ddd7/5237b9d3ab300df8

--- " Clojure-contrib also prompted a question that every open-source software project must grapple with: how to handle ownership. We'd already gone through two licenses: the Common Public License and its successor, the Eclipse Public License.

Rich proposed a Clojure Contributor Agreement as a means to protect Clojure's future. The motivation for the CA was to make sure Clojure would always be open-source but never trapped by a particular license. The Clojure CA is a covenant between the contributor and Rich Hickey: the contributor assigns joint ownership of his contributions to Rich. In return, Rich promises that Clojure will always be available under an open-source license approved by the FSF or the OSI.

Some open-source projects got stuck with the first license under which contributions were made. Under the CA, if the license ever needs to change again, there would be no obstacles and no need to get permission from every past contributor. Agreements like this have become standard practice for owners of large open-source projects like Eclipse, Apache, and Oracle. "

---

" The growth of contrib eventually led to the need for some kind of library loading scheme more expressive than load-file. I wrote a primitive require function that took a file name argument and loaded it from the classpath. Steve Gilardi modified require to take a namespace symbol instead of a file. I suggested use as the shortcut for the common case of require followed by refer. This all happened fairly quickly, without a lot of consideration or planning, culminating in the ns macro. The peculiarities of the ns macro grew directly out of this work, so you can blame us for that. "


" Clojure 1.3, the first release to break backwards-compatibility in noticeable ways (non-dynamic Vars as a default, long/double as default numeric types).

"

---

" To take one prominent example, named arguments were discussed as far back as January 2008. Community members developed the defnk macro to facilitate writing functions with named arguments, and lobbied to add it to Clojure. Finally, in March 2010, Rich made a one-line commit adding support for map destructuring from sequential collections. This gave the benefit of keyword-style parameters everywhere destructuring is supported, including function arguments. By waiting, and thinking, we got something better than defnk. If defnk had been accepted earlier, we might have been stuck with an inferior implementation. "

--- "

It's a difficult position to be in. Most of Clojure/core's members come from the free-wheeling, fast-paced open-source world of Ruby on Rails. We really don't enjoy saying "no" all the time. But a conservative attitude toward new features is exactly the reason Clojure is so stable. Patches don't get into the language until they have been reviewed by at least three people, one of them Rich Hickey. New libraries don't get added to clojure-contrib without multiple mailing-list discussions. None of the new contrib libraries has reached the 1.0.0 milestone, and probably won't for some time. These hurdles are not arbitrary; they are an attempt to guarantee that new additions to Clojure reflect the same consideration and careful design that Rich invested in the original implementation. "

" With the expansion of contrib, we've given name to another layer of organization: Clojure/dev. Clojure/dev is the set of all people who have signed the Clojure Contributor Agreement. This entitles them to participate in discussions on the clojure-dev mailing list, submit patches on JIRA, and become committers on contrib libraries. Within Clojure/dev is the smaller set of people who have been tasked with screening Clojure language tickets. Clojure/core overlaps with both groups. "

"

At the tail end of this year's Clojure/conj, Stuart Halloway opened the first face-to-face meeting of Clojure/dev with these words: "This is the Clojure/dev meeting. It's a meeting of volunteers talking about how they're going to spend their free time. The only thing we owe each other is honest communication about when we're planning to do something and when we're not. There is no obligation for anybody in this room to build anything for anybody else." "

---

maybe a good idea to look at how other languages broke compatibility in later releases to learn what not to do

---

clojure does named arguments via destructuring:

http://stackoverflow.com/questions/3337888/clojure-named-arguments

however, imo this is a little verbose:

(defn blah [& {:keys [key1 key2 key3]}] (str key1 key2 key3))

you have the cognitive effort required to parse an extra two levels of nesting (i mean the { and the inmost [), and the typing effort required to type it, as well as & and :keys

and default arguments are even more verbose:

(defn blah [& {:keys [key1 key2 key3] :or {key3 10}}] (str key1 key2 key3))

python's

    def blah(key1=None, key2=None, key3=10):
        return '%s%s%s' % (key1, key2, key3)

seems easier to read and to type (and would be more concise, if there were not keyword args without defaults in the example, or if python had a variadic 'str')

---

in any case, good idea to include all of clojure's destructuring stuff, such as:

clojure's & destructuring:

    user> (defn blah [& [one two & more]] (str one two "and the rest: " more))
    #'user/blah
    user> (blah 1 2 "ressssssst")
    "12and the rest: (\"ressssssst\")"

and map destructuring:

    user> (defn blah [& {:keys [key1 key2 key3]}] (str key1 key2 key3))
    #'user/blah
    user> (blah :key1 "Hai" :key2 " there" :key3 10)
    "Hai there10"

---

Icon's success/failure control flow ('goal-directed execution')

is there some way to generalize this, e.g. to have code demarcated by sigils which uses a different control flow?

i guess macros could do this, of course, but is there a useful common case/algebra that we should standardize/parameterize?

---

should always be asking people 'what did we do wrong that we should have done differently if we didn't care about backwards incompatibility?', so that we accumulate a list of such things and can consider doing a bunch of them at once for a major revision. e.g. in that blog post where the Clojure guy said:

" The growth of contrib eventually led to the need for some kind of library loading scheme more expressive than load-file. I wrote a primitive require function that took a file name argument and loaded it from the classpath. Steve Gilardi modified require to take a namespace symbol instead of a file. I suggested use as the shortcut for the common case of require followed by refer. This all happened fairly quickly, without a lot of consideration or planning, culminating in the ns macro. The peculiarities of the ns macro grew directly out of this work, so you can blame us for that. "

this sounds like he thinks it could have been done better...

--

Icon's success/failure control flow ('goal-directed execution')

some of the examples in the wikipedia article involve if and while statements that exit, doing nothing more, as soon as a failure is returned -- e.g. when an exception is caught. as the wikipedia article notes, this can be imitated by a try/catch block in Java. so mb all we need is a syntactic sugar macro operator that surrounds a block and modifies it to say, 'if this block is exited via an exception, catch the exception and do nothing in the catch block', e.g. if the operator were '!!!', then:

!!!while write(read())

would expand to

    try {
        while (write(read()))
            ;
    } catch (Exception e) {
        // do nothing; the loop is simply exited
    }
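in Python terms, contextlib.suppress around a loop already gives this '!!!' behavior, with EOFError standing in for Icon's failure (a sketch):

    import sys
    from contextlib import suppress

    # roughly: !!!while write(read())
    with suppress(EOFError):
        while True:
            line = input()              # raises EOFError at end of input; the "failure"
            sys.stdout.write(line + "\n")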

---

http://www.nngroup.com/articles/ten-usability-heuristics/

--

in production Jasper code, everything should be typed (by which i mean, not type Any), except if you are writing a library to do metaprogrammy stuff (in which case the user of this library, or perhaps the library itself using explicit type manipulation to create types which are then passed to asserts, should use assertions to type its results)

need a notation for 'take the type which is in variable x, and assert that variable y is of that type'
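in Python terms the notation would amount to something like this (assert_is is a made-up helper, and x is just an ordinary variable holding a type object):

    def assert_is(y, x):
        """Assert that value y is of the type held in variable x."""
        assert isinstance(y, x), f"{y!r} is not a {x.__name__}"
        return y

    T = int            # a type built or chosen at runtime, e.g. by metaprogramming
    assert_is(3, T)    # passes
    # assert_is("3", T)  # would raise AssertionError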

--

is a fundamental notation like &x, *x for taking the reference to x and dereferencing it even needed, or is this just &(x) = [x], *([x]) = x within some perspective?

--

could use the convention opX arg1 Xarg2 arg3, where X is any symbol, to denote that a functor should be applied to 'op' with reference to input 'arg2', and the arguments arg1 arg2 arg3 should then be applied to the result. X could mark multiple arguments, or you could have something like opX arg1 (X1)arg2 (X2)arg3 to distinguish between marks on the various arguments, or perhaps XX, XXX is better than (X2), (X3), although i doubt it.

useful examples:

op- arg1 -arg2 arg3: the inverse of op with respect to arg2 after the other arguments have been fixed

op@ arg1 @arg2 arg3: map op over the list arg2 where the other arguments are fixed

ok how do you represent -7 then? --7? or i guess just 7-? mb should make it

  -op arg1 arg2- arg3

so that -7 is still negative 7?

do we really need the @ on the left in @op arg1 arg2@ arg3 ? or can we just say op arg1 arg2@ arg3?

--

perhaps have capitalized but not uppercase symbols denote, not only labels with a generic meaning, but also macros:

f x <-- you know its not a macro

Macro x <-- you know that Macro is either a macro, or a generic label; in any case, it's a location where some metaprogrammy stuff is being applied

--

could have

start..step..end

which works like Python's arange (i think i've mentioned that a million times before)
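i.e. roughly numpy's arange, with the step in the middle (the .. syntax is hypothetical Jasper):

    import numpy as np

    # hypothetical Jasper: 0..2..10
    print(np.arange(0, 10, 2))   # [0 2 4 6 8]; start, stop, step; stop excluded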

--

could allow the user to define things which work like

start..step..end

but with user defined symbols

e.g.

a&#c would act like a&#b&#c with a default b

--

i guess backslash is a character escape (letting you write newlines and unicode and whatever), and $ is a logical escape (string interpolation, multistage programming)

--

if we in general distinguish between attached and unattached symbols, then for each symbol we get to define three things: its infix meaning, its prefix meaning, and its postfix meaning.

+ could be:

  infix: concatenate two lists
  prefix: wrap (e.g. +x = [x]) (or perhaps append; but that's combining prefix and infix, e.g. a +b -> a + [b])
  postfix: like x++ in C

otoh we may want to always and only use the op- arg1 -arg2 arg3 pattern for prefix and postfix meaning..

my best guess is: only sometimes use the op- arg1 -arg2 arg3; have + be:

  infix: concatenate two lists
  prefix: append; a +b -> a + [b]
  postfix: like x++ in C

if the user wants to define more things to behave like op- arg1 -arg2 arg3, there is a designated symbol for that, e.g. if the designated symbol were %, then op%USER_SYMBOL arg1 %USER_SYMBOLarg2 arg3

note that in any case i am still sticking with doubling the arithmetic symbols for their arithmetic meaning, e.g. ++ for addition, -- for subtraction, ** for multiplication, and either %% or // for division (i'm leaning towards %% for division; mod can be the word 'mod').

how then do you do something like marking subtraction with -? i guess you would use the user symbol for that pattern, e.g. op%- arg1 %-- arg3. hmm, still ambiguous b/c where are the symbol boundaries.. mb better to use parens (attached parens) for that: op- arg1 (-)- arg3

(or, as per the note above about -7, reverse prefix and postfix so that -7 is still negative seven as people expect)

--

still thinking about how to support a generic datastructure thingamajig that allows a program to easily keep track of the past state of a datastructure or set of datastructures, so that it's really easy to implement undo/redo. perhaps what is needed is to extend the notion of database transaction, not just as a concurrency primitive, but also as a way to denote an instant of atomic logical time

should also look at how datomic handles this. ok, it looks like datomic says "A datom consists of an entity, attribute, value and transaction (time).". in other words it's just RDF triples with time added (might call it RDFt).
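a minimal Python sketch of the transaction-log idea (UndoableDict is made up; datomic itself is much richer): each transaction records the inverse of its writes, so undo is just replaying the last log entry backwards:

    class UndoableDict:
        """Dict wrapper that logs each transaction's inverse writes for undo."""
        _MISSING = object()

        def __init__(self):
            self.data = {}
            self.undo_log = []      # one entry per transaction: [(key, old_value), ...]
            self.current = None

        def begin(self):
            self.current = []

        def commit(self):
            self.undo_log.append(self.current)
            self.current = None

        def set(self, key, value):
            self.current.append((key, self.data.get(key, self._MISSING)))
            self.data[key] = value

        def undo(self):
            for key, old in reversed(self.undo_log.pop()):
                if old is self._MISSING:
                    del self.data[key]
                else:
                    self.data[key] = old

    d = UndoableDict()
    d.begin(); d.set("a", 1); d.set("b", 2); d.commit()
    d.begin(); d.set("a", 10); d.commit()
    d.undo()
    print(d.data)   # {'a': 1, 'b': 2}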

--

see jasperLogicNotes for an interesting thought related to RDFt

--

comparison of pandas and R dataframe: http://blog.yhathq.com/posts/R-and-pandas-and-what-ive-learned-about-each.html

--

hmm focus more on &, * (ref, deref) as versions of x -> [x], [x] -> x, i think there's something there

--

should be easy to tell Jasper to log all file open-for-write operations
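in Python you can fake this today by wrapping the builtin (a sketch of the idea, not an existing Jasper feature):

    import builtins

    _real_open = builtins.open

    def logging_open(file, mode="r", *args, **kwargs):
        # any mode that can create or modify the file counts as open-for-write
        if any(c in mode for c in "wax+"):
            print(f"open-for-write: {file!r} (mode={mode!r})")
        return _real_open(file, mode, *args, **kwargs)

    builtins.open = logging_open    # from here on, every open() gets logged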

--

wow! macros in python! https://github.com/lihaoyi/macropy#case-classes

the macros themselves are also super useful and should be in jasper! and in python!

--

also interesting is how he snuck this (the macros) in despite GvR?'s dislike of macros. there is a new 'import hook' intended for a quite limited use case: " Extending the import mechanism is needed when you want to load modules that are stored in a non-standard way. Examples include modules that are bundled together in an archive; byte code that is not stored in a pyc formatted file; modules that are loaded from a database over a network. "

but of course one could consider source code in any other language that can be compiled to Python to just be Python "stored in a non-standard way", so Python + macros is one such language.

---

could use the same hooks to include Jasper modules in Python, if they can be compiled to Python..

---

also js macros:

https://news.ycombinator.com/item?id=5699524

natefaubion 6 hours ago

link

If you're interested in doing some of this stuff in JS, there's sweet.js[1] for macros. I've also implemented some similar stuff as libraries: adt.js[2], matches.js[3], and tailrec.js[4]. By implementing them as libraries, I've given up some of the nicety of native-looking syntax, but it requires no preprocessing and only ES3.

[1] http://sweetjs.org/

[2] https://github.com/natefaubion/adt.js

[3] https://github.com/natefaubion/matches.js

[4] https://github.com/natefaubion/tailrec.js

reply

--

another one (!)

https://github.com/mmikulicic/metacontext

--

here's a lisp in python:

https://github.com/paultag/hy

---

--

http://groups.csail.mit.edu/mac/users/gjs/6.945/

http://groups.csail.mit.edu/mac/users/gjs/6.945/readings/

---

http://golang.org/doc/go1.1#method_values

--

http://golang.org/doc/go1.1#return

--

http://www.polymer-project.org/getting-started.html

http://www.polymer-project.org/polymer.html

--

Pxtl 3 days ago

link

I realize it hasn't been like that in about a decade, but whenever I start seeing web-component tech I get bad flashbacks to ASP.net WebForms.

reply

polskibus 3 days ago

link

In my opinion the worst thing about WebForms was statefulness almost everywhere and a very complicated page lifecycle, including the refresh of nested components. For small applications it's great, but it has problems when you're trying to build a large, dynamic product. Composability is much easier with MVC, where you can link to controller actions instead of always depending on the way nested controls process postback events and rebuild themselves.

reply

Pxtl 2 days ago

link

Exactly. I actually kinda liked the ideas behind WebForms - the ViewState thing was a clever way to shoehorn statefulness into the Web. But the abstractions in it were so leaky and brittle as to be worse-than-useless. The page lifecycle was a nightmare, data-binding was frustratingly finicky about types and parameters (and it was always a roll of the dice whether a given component would use empty strings or nulls to indicate an empty value) and so on.

Web forms was full of half-assed abstractions. Every postback it re-built your control tree and re-applied the viewstate, but any actual additions or removals of controls made in code-behind were not part of this rebuild and viewstate application, so it completely half-assed the control rebuild and thrashed its own abstraction. If you added a new control dynamically on one postback, you had to re-add it every postback, which demolishes the stateful model it presents with respect to the properties of controls.

reply

--

http://coding.smashingmagazine.com/2011/09/09/an-introduction-to-less-and-comparison-to-sass/

--

" On the other hand, exchange in a *non-atomic* context is quite useful if it is reachable from your host language. A lot of performance critical algorithms run much faster with exchange so long as the data structure is monothreaded. I vaguely recall the memory part of the system sort function speeding up by >30% when exchange replaced load/store. Note that if you don't need to be atomic, a swap-word operation can be used to swap-arbitrary simply by walking wordwise along the structure.

In software, Mary is the only language I know of that exposed exchange as a language primitive (as opposed to an intrinsic function). Mary had no operator precedence, using strict left-to-right execution order including assignment. Thus "A + B * C =: D;" means load A to a register, add B, multiply the result times C, and store the product to D.

The Mary exchange operation was ":=:" and was applicable to any types for which "=:" (store) was applicable. Thus "new :=: old.next =: new.next" is a linked list insert; very convenient and natural notation IMO. The compiler used the hardware exchange operation if one existed, or loads and stores if not, depending on target.

Note that the Mary :=: was *not* an atomic. It was just a notation for a common idiom, swap, without requiring a scope, a new variable in the namespace, and the nuisance of maintaining the scratch declaration in the face of type changes of the value being swapped. Only with "auto" and C++11 can conventional languages avoid that nuisance in metaprogramming, and Mary did a *lot* of metaprogramming. "
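python's simultaneous assignment gets the same scratch-free linked-list insert (a sketch of the idiom, not Mary's semantics exactly):

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    head = Node(1, Node(3))
    new = Node(2)

    # Mary's `new :=: old.next =: new.next` as one simultaneous assignment:
    # old.next becomes new, and new.next becomes old's former successor
    old = head
    old.next, new.next = new, old.next

    print([n.value for n in (head, head.next, head.next.next)])   # [1, 2, 3]

--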

The language also contains operators which are similar to, and in many cases the same as, those provided by PostScript. They include: dup, add, sub, neg, exch, rol, ndup, pop, <, >, etc.

--

i mention the following only because propagation of some mode "back through the dataflow of ordinary expressions" may be useful in general, not just for overflow modes, which Jasper won't provide too much flexibility about b/c it is high level (e.g. we'll have exceptions):

" This is being addressed by hardware in some new architectures. The "Mill" CPU from Out-of-the-Box Computing has four variants (a 2-bit field in the opcode) for overflow handling for all integral operations that can overflow. The supported semantics are modulo, saturation, exception, and double-width result. The desired mode can be specified as a default by compiler option or reached directly in assembler or via equivalent intrinsic functions in a library.

In addition the compilers support cv-qualification-style specifiers that can be used to decorate an integral type with the desired overflow semantics: "_saturating unsigned char pixel;". The compilers propagate the overflow specification of a destination back through the dataflow of ordinary expressions: "pixel = a + b;" uses saturation for the "+" regardless of the overflow specification of "a" and "b". This provides what is nearly always wanted (and avoids a nightmare of promotion rules), but can be overridden via intermediate destinations (including casts) or use of an explicit operator: pixel = (_excepting unsigned char)(a+b); pixel = overflow::excepting_plus(a, b); "

--

need to choose 'nan is error' or 'nan is missing value'. make it the same for all operations. (not what they did in this standards committee: https://groups.google.com/d/msg/comp.arch/iUfFfrgAyDI/mAitriy4rPcJ ). use something else for a missing value.

mb only support IEEE 754 ( https://en.wikipedia.org/wiki/IEEE_floating_point ) partially, as https://groups.google.com/d/msg/comp.arch/iUfFfrgAyDI/mAitriy4rPcJ (1) seems to say that nan is used inconsistently for error and for missing values, and we want to use it consistently for error

(1) From: Ivan Godard <i...@ootbcomp.com>
    Newsgroups: comp.arch
    Subject: Re: Index register and accumulator
    Date: Wed, 27 Feb 2013 10:07:18 -0800

apparently it's mainly less-used fns like min and max that give nan the semantics of a missing value. related issue:

" >>> Kahan's ... In its general form, even for the purest of pure >>> functions f(), 'x == y' can be true and 'f(x) == f(y)' false. >> >> For example:: >> >> float a=NAN,b; >> b=a; >> if(a==b) { equal } >> else { unequal } >> >> In this snippet, the 'esle' clause is executed. A NAN is not even >> equal to itself. >> >> But then they go on to define >> >> b=MAX(a,3.0); >> >> as 3.0 even though the result of a comparison between a NAN and a finite >> is 'uncomparable'. MIN is the same. This is new in IEEE 754-2008. It prevents >> the library from the simple ever-used definition: >> >> # define max(a,b) ((a)>(b)?(a):(b)) "

another issue:

" >> On Tuesday, February 26, 2013 12:55:16 PM UTC-6, nm...@cam.ac.uk wrote: >>> My point was the simple design error of doing BOTH of the following: >> >>> Confounding true zero and approximate zero with a positive >> infinitesimal >> >>> Defining 1/0 to be +infinity and 1/-0 to be -infinity ... > Would that it were as simple as three values. Of potential interest for > use as "zero" values are: exact zero, infinitesimal, non-negative > infinitesimal, positive definite infinitesimal, negative infintesimal, > non-positive infinitesimal, and infinitesimal but finite. Any choice > will result in a system that either will not meet some specialist > application, or will be overly complicated and error prone for other > applications. For most users I suspect the easiest to use choice would > have been two simplest values, i.e., exact zero and infinitesimal, but > an exact zero would be most useful if the other finite values had an > exact counterpart, which in turn requires setting aside a bit for a > flag, something viewed as unacceptable for 32 bit single precision. > > For the exceptional values there are a similar variety of choices: > unsigned undefined (NaN?), two signed undefineds, unsigned out of range > (Inf), two signed out of ranges, missing data, and division of finite by > exact zero. Each has potential application, but to deal with each > typically requires special case code and to deal with all results in a > horrible mess. Probably easiest to deal with for the typical user would > be a single undefined, but not much more difficult to deal with, would > would have been a single undefined and a single out of range. > > The development of IEEE 754 was strongly influenced by Kahan, who in > turn was primarilly interested in interval arithmetic. This application > did not need exact values, so it could be usefully represented by single > precision, and, for exceptional values, it benefited from signed zeros > and signed infinities. Denormals are also useful for this application, > and rounding modes are critical. However any look at the logic needed > to implement interval arithmetic reveals a large number of special > cases, often involving rounding modes. As a result it has limited > domain of application, runs like molasses in January in Greenland, and > has not achieved significant acceptance. While most users will not > change rounding modes during the run of an application, which reduces > the number of special cases, still too often the number of special cases > under this arithmetic system are difficult to keep track of and hence > error prone to deal with. "

--

immutable, copy-on-write

--

smart pipes

--

mb call perspectives 'views', like database views

--

probably not too useful, but this paper sets out some ways ('ACTA') to extend transactions: (a) allowing defined non-isolation between transactions ('views'); (b) providing a language to state cross-transaction constraints such as "if transaction T1 aborts, then abort transaction T2 also"; and (c) allowing transaction delegation: "Conceptually, the transaction history is modified such that the operations which were performed by Ti are now recorded as being performed by Tj. For example, in nested transactions, if a child transaction Tc commits, its effects are delegated to its parent Tp"

http://www.captura.uchile.cl/bitstream/handle/2250/7093/Fabry_Johan.pdf?sequence=1 KALA: Kernel aspect language for advanced transactions Johan Fabry Eric Tanter Theo D'Hondt

we should ensure that our language has the capability to create extension/alternatives to the transaction notation like this

--

numpy's masks, or a nan-like 'missing value' value?

--

how to avoid this mess?

https://www.securecoding.cert.org/confluence/display/seccode/INT02-C.+Understand+integer+conversion+rules

some ideas:

--

and remember to make the above implementable by metaprogramming within the language:

adding copy-on-write immutables to a language without them, or to a language with mutables only;
adding smart pipes to a language without them, or to a language with pipe used as mere function composition;
adding nan and inf to reals;
adding a missing value to floats;
adding promotion etc. rules;
adding cross-transaction constraints, transaction semi-isolation, and/or transaction delegation to a language without transactions, or to one with transactions

--

http://www.ac.usc.es/arith19/sites/default/files/SSDAP3_ArithmeticOnTheMillArchitecture.pdf has the interesting idea of letting the program choose between different options in case of overflow (e.g. when adding 123 + 234 in a single byte): truncated result (101), exception, saturated result (255), double width full result (357) (i dont quite see how double width full result could work). i think this is too low level for us and we want to just (a) not have overflowable data types, and (b) if we do, have exceptions. but interesting to think about.
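the four modes are easy to state concretely (a Python sketch using the 123 + 234 example from above; 'double width' just returns the full value, which on hardware would presumably go to a double-width destination):

    def add_u8(a, b, mode="wrap"):
        """Unsigned 8-bit addition under the four Mill-style overflow modes (sketch)."""
        r = a + b
        if mode == "wrap":                 # truncated result
            return r & 0xFF
        if mode == "saturate":             # clamp to the max representable value
            return min(r, 0xFF)
        if mode == "except":               # raise on overflow
            if r > 0xFF:
                raise OverflowError(f"{a} + {b} overflows a byte")
            return r
        if mode == "double":               # full result, in a wider type
            return r
        raise ValueError(mode)

    print(add_u8(123, 234, "wrap"))        # 101
    print(add_u8(123, 234, "saturate"))    # 255
    print(add_u8(123, 234, "double"))      # 357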

-- " > > IBM's network edge processor, > http://www.cercs.gatech.edu/iucrc10/material/franke.pdf, 2010, has > * crypto > * compression > * RegEx? > * XML > > (How important is RegEx? matching? Hmm.... malware scanning?) > > Crypto and compression go hand in hand because both are often memory to > memory, data moves or copies with some transformations. As are RegExp? > matches, and some XML. "

suggests good candidates for libraries

(also graphics, i guess)

--

most common asm instructions?

mb these are they, not sure tho, the poster seems to have a habit of posting potentially interesting things with no explanation as a sort of game:

http://newsgroups.derkeiler.com/Archive/Comp/comp.arch/2013-03/msg00273.html http://newsgroups.derkeiler.com/Archive/Comp/comp.arch/2013-03/msg00297.html

to learn what most of those are, some quick tutorials matching a phrase like "common assembly instructions" may be useful:

http://csclab.murraystate.edu/bob.pilgrim/405/x86_assembly.html

http://programminggroundup.blogspot.com/2007/01/appendix-b-common-x86-instructions.html

the rest can be looked up (FLD and FSTP are the FPU's load-onto-stack and store-from-stack-and-pop operations, IMUL is integer multiplication, FMUL is float multiplication). Note that although they say NOT is uncommon, negative polarity can be had via (informal notation) TEST x; JNE {return 0} {return 1} or by XOR x,1.

comments note that run-time analysis might be better:

hmm would be neat to compare with asm js

http://asmjs.org/spec/latest/

looks like we have:

numeric literals, array lookup, fn call (and return), variable assignment, ifs, while, for, break, continue, switch, module import/export,

unary ops: + - ~ (bitwise not) ! (not)

Binary Operators: + -
>>> (bitwise shift right, zero fill)
<, <=, >, >=, ==, !=
|, &, ^, <<, >> (bitwise or, and, xor, shift left, sign-propagating shift right)

9 Standard Library: Infinity, NaN, Math.acos, Math.asin, Math.atan, Math.cos, Math.sin, Math.tan, Math.ceil, Math.floor, Math.exp, Math.log, Math.sqrt, Math.abs, Math.atan2, Math.pow, Math.imul, Math.E, Math.LN10, Math.LN2, Math.LOG2E, Math.LOG10E, Math.PI, Math.SQRT1_2, Math.SQRT2

switch statements are compiled to jump tables, not nested ifs.

--

(quotes are from user BGB <cr88...@hotmail.com>, can't tell who that is tho; they had no sig, but i think their other handle is cr88192; oh mb it's here http://sourceforge.net/projects/bgb-sys/ "pdsys", can't find other links about it tho, so i presume it's dead)

" > part of the issue is that, to a large degree, it was assumed that > type-safety == security. > > type-safety is no more a general solution to security than it is to bugs. > > although type-safety is a part of the issue, it isn't really really a > complete solution, and with a little imagination, it is also possible to > sandbox code written in languages like C and C++ as well. > > > one idea here would be, for example, running C code in an "abstracted" > address space using Unix-like access rights (UID/GID and access flags), > with potentially every memory object and function having its own sets of > access rights. > > then, even with a wild pointer, you still can't stomp on memory that the > currently executing code doesn't have access rights though. > > thus, security without type-safety (nevermind whether or not it still > runs at full speed, in many/most cases the checks could be optimized > away though). > > on the other side, even if all types are known correct, it is still > possible to write code which is buggy or insecure. > > > granted, I think Java may have a per-thread security model, but off-hand > I am not certain of its specifics. > > > side note: > I am actually using such a Unix-like security model to some extent with > my scripting language. > > the basic rights are Read/Write/Execute/Special, and come at 3 levels: > User, Group, Everyone. > > note: don't confuse with the OS level security model. > > currently, access rights are attached to objects, but currently not all > objects qualify: cons-cells always have full rights (partly as conses > are lightweight and have very little side-information); > native C functions and data defaults to being "root only" (effectively, > at present only root code may directly access C land, but at the moment, > root code is "most of it", which is itself sort of a glaring security > hole). > > generally, security checks are often handled along with the dynamic > type-checks. ( a lot of this sort of sideband data is tied to objects > via the GC. object headers can be kept reasonably compact by having them > hold indices into secondary tables, rather than holding all this > information directly for every memory object. ) > > > note that if a memory object is allocated (in C land) without setting > rights, it defaults to root only, though generally the script language's > "new" operator will create objects with default ownership set for the > current UID/GID of the current thread. > > some work may still be needed before it is really useful though. > > or such... >

Been done. http://en.wikipedia.org/wiki/Capability-based_addressing. "

(seems to me that "Capability-based_addressing" is just the name for the low-level implementation of capability-based security, so for a high level language like Jasper, capability-based security is what we want)
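
a minimal Python sketch of the capability style (the class names are mine, and it's only an approximation: Python can't make the wrapped reference truly unforgeable):

  class ReadCap:
      """Read-only capability: holding a reference to this object IS the permission."""
      def __init__(self, target):
          self._target = target      # private by convention only; a real capability
                                     # system would make this unreachable
      def read(self, key):
          return self._target[key]

  class ReadWriteCap(ReadCap):
      def write(self, key, value):
          self._target[key] = value

  store = {'balance': 100}
  rw = ReadWriteCap(store)           # full capability, kept by the owner
  ro = ReadCap(store)                # attenuated capability, handed to clients
  ro.read('balance')                 # => 100
  # ro has no .write method: clients simply lack the authority,
  # with no ambient UID/GID check needed at each access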

--

"

> consider:
>
>   char tb[256];
>   char *s, *t;
>   ...
>   t=tb;
>   while(*s++=*t++);
>
> how do you prevent a buffer overflow from doing damage?...
>
> what if 'tb' is in fact a bounds-checked array, and 't' retains memory of the object it points to (it is either a fat-pointer or a boxed pointer-object).
>
> so, when the pointer runs off the end of the array, it throws an exception.

The problem with boxed pointers (aka descriptors) is the size. On the B6500 we had tag-5 descriptor words (48 bit) with 20-bit base and length (didn't need an index because pointer arithmetic didn't exist), giving a 1MB max array size in the box. Big time for 1968 :-)

Done right, a descriptor needs base, length and index and a few flag bits. That's 16 bytes even if arrays are restricted to 1GB size. People bitch about code blow-up and icache thrashing with only the 32-64 transition. Think of the furor when ++p is a 16-byte RMW, or rather a 16RM4W (you need the base for --p).

Of course, the gain is worth it and the putative costs will be minor. Except in benchmarks "
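
a toy Python model of the fat/boxed pointer idea (illustrative only, nothing to do with the B6500 layout): the pointer carries its base object and index, and dereferencing traps instead of overflowing:

  class FatPointer:
      def __init__(self, array, index=0):
          self.array = array           # remembers the object it points into
          self.index = index
      def load(self):
          self._check()
          return self.array[self.index]
      def store(self, value):
          self._check()
          self.array[self.index] = value
      def incr(self):                  # ++p just moves the index; bounds are
          self.index += 1              # enforced at dereference time
      def _check(self):
          if not 0 <= self.index < len(self.array):
              raise IndexError("pointer ran off the end of the array")

  tb = [None] * 4
  t = FatPointer(tb)
  for ch in "abcde":                   # one element too many...
      t.store(ch); t.incr()            # ...raises IndexError instead of
                                       # silently stomping adjacent memory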

--

"

Starting a list for the last:

Inviting additions - what are your favorite examples of special purpose registers? "

" Here's few from the Mill I can talk about (there are no computation registers):

                 cLwbReg = 10,       //  code region lower bound
                 coreNumReg = 11,    //  ordinal of core on chip
                 cpReg = 12,         //  code pointer
                 cppReg = 13,        //  constant pool pointer
                 cUpbReg = 14,       //  code region upper bound
                 cycReg = 15,        //  cycle counter (issue or not)
                 dLwbReg = 16,       //  static data region lower bound
                 dpReg = 17,         //  static data pointer
                 dUpbReg = 18,       //  static data region upper bound
                 entryReg = 19,      //  entry address of current ebb
                 faultReg = 21,      //  fault vector base
                 floatReg = 22,      //  floating-point state
                 fpReg = 24,         //  frame pointer
                 funcReg = 26,       //  function entry address
                 inpReg = 27,        //  inbound argument pointer
                 issueReg = 28,      //  instructions issued counter
                 noCacheLwbReg = 30, //  MMIO region lower bound
                 noCacheUpbReg = 31, //  MMIO region upper bound
                 processReg = 32,    //  process ID
                 rtcReg = 33,        //  real time clock
                 runReg = 34,        //  start-stop control
                 spReg = 40,         //  stack pointer, and stack region upper
                 stepReg = 42,       //  issue cycles within current ebb
                 threadReg = 43,     //  thread ID
                 tpReg = 44,         //  task (thread) pointer
                 trapReg = 45,       //  trap vector base "

"

How could you leave out link register and stack pointer register? :-) (These might be true SPRs or dedicated GPRs with special instructions. The zero register is a SPR mapped to GPR space, though it is read- only [unless used as a single-use value store that returns to zero after each use :-)].)

(Other SPRs mapped into the GPR space might include OS temporary/scratch registers which can be overwritten by interrupt handlers, but that is an ABI matter, so I would not count them as SPRs. For MIPS, the SPR [Stack Pointer register] is not an SPR :-), but the Link Register could be considered such since JAL and JALR both implicitly use the LR.)

Register frames might not count even if they can be swapped in and out flexibly.

If a register stack save area pointer allowed non-privileged writes (requiring privileged use to use non-privileged stores or load a trusted value [possibly only if a modified status bit is set]--or be shadowed), it might be considered an SPR (though the benefit of non-privileged writes seems small).

You are probably not looking for configuration and status registers, but it is not clear how these would be distinguished from 'ordinary' SPRs (perhaps by requiring privilege for explicit--not side-effect--writes?). Is a flags register a status register (written as a side effect of operations but perhaps in some ISAs otherwise not writable by unprivileged code)? (The Current Instruction Pointer is probably my favorite status register. :-) )

Presumably you are also distinguishing type registers (like FP, predicate/condition, address+data, branch target [Itanium had multiple BRs]) from SPRs (perhaps by singularity? [but then Power's condition registers might count as SPR--or a single 32-bit SPR??--since, apart from non-SIMD compares, which CR is used depends on the type of instruction]).

If a GPR is used as a Global/TLS pointer register but only as a hint for a Knapsack Cache, is it a SPR?

SPARC64 VIIIfx has the Extended Arithmetic Register to hold prefixes for following instructions. (It is claimed to be a non-privileged register, but it is not clear if it can be written--apart from instruction fetch--by unprivileged code. I do not know how it is handled for exceptions.)

Instruction registers have been proposed (along with execute-register), but unless such is singular it might be counted as a different type register rather than a SPR. "

--- zero register: value is constant (always 0)

link register: holds the address to return to after the current subroutine is done (the current continuation, i guess)

stack pointer register (SP or ESP): points to the top of the stack

frame pointer register (BP or EBP): points to where the top of the stack was at the time of the entry to this subroutine. this means that local variables each exist at a constant offset from this address.

--- http://web.archive.org/web/20071123230846/http://www.ugrad.math.ubc.ca/Flat/

(less well organized: http://www.tailrecursive.org/postscript/operators.html )

---

http://web.archive.org/web/20040824222221/http://www.ugrad.math.ubc.ca/Flat/stack-ref.html

 [=] [count] [clear] [copy] [dup] [exch] [index] [pop] [stack] [roll]

=:   object  =  ->  (nothing)

This command removes the topmost object from the stack and prints it.

copy:   obn ... ob1  n  copy  ->  obn ... ob1 obn ... ob1

This operator adds a copy of the top n objects on the stack.

index:   obn ... ob1  n  index  ->  obn ... ob1 obn

This operator adds a copy of the nth object in the stack to the top.

stack:

This command prints a copy of the entire stack, starting from the top and working to the bottom.

roll:   ob(n-1) ... ob(0)  n  j  roll  ->  ob((n-1+j) mod n) ... ob(j mod n)

This operator cyclically rotates the top n objects on the stack by j positions.

(bayle: the others in that list do what you think)
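
the same ops sketched over a Python list used as a stack (top of stack = end of list):

  def dup(stack):            # duplicate the top object
      stack.append(stack[-1])

  def exch(stack):           # swap the top two objects
      stack[-1], stack[-2] = stack[-2], stack[-1]

  def copy(stack, n):        # push copies of the top n objects
      stack.extend(stack[-n:])

  def index(stack, n):       # push a copy of the nth-from-top object
      stack.append(stack[-1 - n])

  def roll(stack, n, j):     # cyclically rotate the top n objects by j
      j %= n
      if j:
          stack[-n:] = stack[-j:] + stack[-n:-j]

  s = [1, 2, 3, 4]
  roll(s, 3, 1)              # top three rotate: s == [1, 4, 2, 3]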

---

super dataframes, where you can label each column but also 'columns', e.g. columns might be 'stocks' and individual columns might be AAPL, IBM, etc

mb when using dataframes you should always refer to columns or rows by name, e.g. don't say 'column IBM', say 'stock IBM'
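
pandas' MultiIndex columns are a rough existing approximation of this (the example below is just illustrative, not a proposal):

  import pandas as pd

  cols = pd.MultiIndex.from_product([['stock'], ['AAPL', 'IBM']],
                                    names=['kind', 'name'])
  data = pd.DataFrame([[10.0, 20.0],
                       [11.0, 19.0]], columns=cols)

  data[('stock', 'AAPL')]    # address a column by label pair, not position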

b/c matrix dims are like stacks in that the postscript stack ops are often used (e.g. roll/exch, dup/copy), but stack programming is often inferior to using named variables

huh.. i guess thats like a hyperdigraph..

data[stock/'AAPL'][day/33] refers to a single stock value (the intersection of data[stock/'AAPL'] and data[day/33] -- actually no, see below)

data[stock/'AAPL'] refers to many stock values -- so we'll be dealing with sets a lot

but if data[stock/'AAPL'] is a set of stock values, if you can do

data[stock/'AAPL'][day/33] = 2

then can you do

x = data[stock/'AAPL']
x[day/33]

?

no, because this would implicitly link x to data (x would be a reference to a slice of data). if you assign a slice to a variable (e.g. once a slice passes thru the assignment operator), it becomes copy-on-write.
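
a minimal Python sketch of that copy-on-write rule (a hypothetical design, not settled Jasper semantics): the slice behaves as a view until its first write, then detaches:

  class CowSlice:
      def __init__(self, source, keys):
          self._source, self._keys, self._owned = source, list(keys), None
      def __getitem__(self, key):
          data = self._owned if self._owned is not None else self._source
          return data[key]
      def __setitem__(self, key, value):
          if self._owned is None:      # first write: detach from the source
              self._owned = {k: self._source[k] for k in self._keys}
          self._owned[key] = value

  data = {'day33': 1.0, 'day34': 2.0}
  x = CowSlice(data, ['day33', 'day34'])   # x = data[stock/'AAPL'], roughly
  x['day33'] = 99.0
  data['day33']                            # still 1.0: x copied itself on write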

but what if we want to call a function to do our slicing for us? hmmm

[] is not exactly intersection; the intersection of data[stock/'AAPL'] and data[day/33] is a set of values, but data[stock/'AAPL'][day/33] is a single value; furthermore, as noted above, data[stock/'AAPL'] is a set but you can't separate that from 'data' and then do [day/33] on it, b/c that is talking about edges whose source is data, but only referring to them by their data-local edge index (i.e. day/33)

hmm.. maybe have a perl-esque 'reference context' and allow slices to remain references, e.g.

&s1 = data[stock/'AAPL']
&s1[day/33]

note that this reads slightly confusingly if the reader doesn't know if data[stock/'AAPL'] is a slice of data, or if it is some other kind of reference.

mb =& to initially take a reference, e.g. to copy a reference where a value copy would be expected... or just & on the RHS like C: one of:

&s1 =& &data[stock/'AAPL']
&s1 = &data[stock/'AAPL']

hey, wait, in the case of __get__, array lookup = fn application, remember? you dont need []s. not doing all the dimensions is partial fn application.

the assignment-to slice is-it-a-reference complexities only apply in case of __set__

also, this probably bears on the open questions of what level to allow __set__ to be overridden

multi-slice assignment:

  data[1:3][11:55] = [1 2 ; 3 4]

hmm.. or should []s be required for assignment, e.g.

  f 1 2 3 == 11
but
  f[1][2][3] = 11

?

then you could do

%s1 = f[1]

e.g. [] would be the "settable index" or "settable slice" operation

that might be nice because you could pass a partially sliced settable into an index lookup subfunction that doesn't view it as a settable, and then set the result, e.g.

data = [1 2 3 ; 4 5 6 ; 7 8 9]
&s1 = data[>1]   ;; &s1 = settable of data for [[7 8 9]]
&odds = &s1[find(isOdd(s1 0))]
&odds = [1 2]
data == [1 8 2]

(if isOdd is auto-vectorized); or if not, without using 'find':

data = [1 2 3 ; 4 5 6 ; 7 8 9]
&s1 = data[>1]   ;; &s1 = settable of data for [[7 8 9]]
&odds = &s1[isOdd(&s1 0)]
&odds = [1 2]
data == [1 8 2]

what happened here is that (a) &s1 0 uses normal indexing into s1, but behind the scenes the fact that &s1 is a settable slice is remembered (b) isOdd is not auto-vectorized over s1 before the enclosing []s are reached, but is a predicate used to select a slice; isOdd is treating its argument as a normal node, but behind the scenes the fact that &s1 is a settable slice is remembered

hmm... i feel that that's a bad example.. or mb not and mb it shows that this isnt as complicated as i think

mb it has to do with inverses..

if we define:

 f x = (x 1) + 1
 y x = x 1
 xx = [1 2 3 ; 4 5 6 ; 7 8 9]

then without []s in the language as settable slices we'd say

 f (y xx) = 1
 xx == [1 2 3 ; 4 0 6 ; 7 8 9]

but with them we'd say

 f (y[xx]) = 1
 xx == [1 2 3 ; 4 0 6 ; 7 8 9]

or

 &s1 = y[xx]
 f &s1 = 1
 xx == [1 2 3 ; 4 0 6 ; 7 8 9]

the point i'm trying to make is that f doesn't know it was passed a settable, but the language keeps track of that behind the scenes, allowing a slice to be further sliced via subfunctions that don't treat it as a slice. it seems we have to do this tracking anyway for inverses.

is there any use to having only a subset of the indexing be settable? e.g.

  f[1:3](2)[3:5] = [1 2 ; 3 4]

(note that if you just wanted to slice, you could do f 1:3 2 3:5

hmm.. this brings up the possibility that [] could be used to indicate slicing rather than settables.. e.g. f 1:3 would be a key lookup in f for key [1,2,3], but f [1:3] would be a slice

hmm...

)

ok i think i am conflating two ortho issues:

(1) if you do an index into 'data' with index '[1 2 3]' or index 'isOdd' (which is a unary boolean function), does this mean:
  (1a) the index is a key, e.g. look up key '[1 2 3]' in 'data'
  (1b) the index is a list of items, e.g. take the slice [data[1], data[2], data[3]] (this only makes sense for isOdd if you treat it as an implicit data[isOdd] -> data[find(map isOdd (values data))])
  (1c) the index is a unary boolean function; apply it to each value of 'data' and return the slice of 'data' for which it is true
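
(a minimal Python illustration of the three readings, with a plain dict standing in for 'data':)

  data = {0: 'a', 1: 'b', 2: 'c', (1, 2, 3): 'keyed'}

  data[(1, 2, 3)]                    # (1a) the index is a key: 'keyed'

  [data[i] for i in [1, 2]]          # (1b) the index is a list of items (a slice): ['b', 'c']

  is_vowel = lambda v: v in 'aeiou'
  {k: v for k, v in data.items() if is_vowel(v)}   # (1c) predicate over values: {0: 'a'}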

(2) how do we indicate when we want to retain the connection between a value we lookedup, or a slice that we took, and the variable that it came from? is this the same as an ordinary reference? in a multidim array, does it mean anything to retain that connection for some dimensions and not others if we lookup over multiple dimensions? if it is an ordinary reference, and we can only retain the connection for all dimensions or for none of them, then shall we just have a special 'reference creation' operator '=&' which remembers the path through its r-value in much the same way that '=' uses the path through its l-value for settables, for example '&s1 =& data[1:3](4)[5:7]'?

one thing we COULD do for (1) is: f [1 2 3] means (1a). f[1:3] == f[[1 2 3]] means (1b) (and f[isOdd] is a type error because isOdd is not iterable). some other notation, or none, for (1c)

but that's confusing because f[[1 2 3]] has two adjacent '['s with different semantics (the outer one means slice, and the inner one is the list constructor).

we could just use the word 'slice' for slicing, and mb have : or :: just for contiguous slicing, e.g. f[1:3] -> slice f 1..3

we could use the notation f(x) for slicing, but this might confuse newcomers mightily

we could use f((x)) for slicing type (1b).. but that looks like a good notation for (1c)

i guess slicing (1b) is really just a form of mapping and we should use the token map syntax @f [1 2 3]@ but now how to remember if the slice is contiguous? does the compiler just handle that? e.g. @f 1..4@ is treated differently than @f [1 2 3]@ ? also note that for multislicing we get to use @ multiple times, e.g. @f [1 2 3]@ [4 5 6]@ is legit

also note as above, we may not need to mark f as @f, marking the args may be sufficient, which reduces the amount of slice typing to close to Python, e.g. f 1..3@

if we do that, then we can use f((x)) for (1c), e.g. 'data((isOdd))'. Note however that a macro can't turn f(x) into f((x)) just by replacing 'x' with '(x)'; the translation from f((x)) into 'mapPredicate f x' has already been done.

now onto (2). so far i like '&s1 =& data 1..3@ 4 5..7@'

--

should we reverse the confusing matrix convention whereby

  data = [1 2 3 ; 4 5 6 ; 7 8 9]

means

  1 2 3
  4 5 6
  7 8 9

but

  data 0 = 1 2 3

e.g. traditionally the first lookup index selects along the outermost dimension of the literal (a row), but mb we'd like the first lookup index to select along the innermost dimension, e.g. data 0 = 1 4 7? e.g. should we have 2-D indices that look like (column, row) rather than (row, column)?

otoh

[1 2 3 ; 4 5 6 ; 7 8 9] == [[1 2 3] [4 5 6] [7 8 9]] is nice
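
(numpy follows the traditional convention being questioned here:)

  import numpy as np

  data = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
  data[0]       # array([1, 2, 3]): first index picks a row of the literal
  data[:, 0]    # array([1, 4, 7]): the (column, row) reading needs a slice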

--

should we be able to apply predicates in indexing as:

  s1 = data[>1]

?

(shortcut for s1 = data[find(data > 1)])
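
(numpy spells exactly this with a boolean mask:)

  import numpy as np

  data = np.array([0, 2, 1, 5])
  s1 = data[data > 1]    # array([2, 5]); 'data > 1' plays the role of
                         # the [>1] predicate index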

--

any, all semantic token operators

to go with hyperdigraph superdataframes:

forall applied to many fns is just mapping; but to a boolean function it is map and then reduce with AND; and to a boolean set it is reduce with AND..
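
(in Python terms:)

  xs = [1, 3, 5]

  list(map(abs, xs))                    # forall over a plain fn: just mapping
  all(map(lambda x: x % 2 == 1, xs))    # forall over a boolean fn:
                                        #   map, then reduce with AND
  all({True, False})                    # forall over a boolean set:
                                        #   reduce with AND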

--

positional arguments are more concise/convenient than named

note that postscript/forth stack ops will be helpful in rearranging positional args, dimensions of multidim arrays, etc

--

simple stack ops (dup, dupn, roll etc) generalized and applied to dicts (e.g. exch: x = data[b]; data[b] = data[a]; data[a] = x) yield some simple graph ops, as sketched below
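
a quick Python sketch (the helper names are mine): exch swaps the values at two keys, and a roll along a cycle of keys is a small graph permutation:

  def exch(d, a, b):                 # x = d[b]; d[b] = d[a]; d[a] = x
      d[a], d[b] = d[b], d[a]

  def roll(d, keys):                 # rotate values one step along a key cycle
      vals = [d[k] for k in keys]
      for k, v in zip(keys, vals[-1:] + vals[:-1]):
          d[k] = v

  g = {'a': 1, 'b': 2, 'c': 3}
  exch(g, 'a', 'b')                  # {'a': 2, 'b': 1, 'c': 3}
  roll(g, ['a', 'b', 'c'])           # {'a': 3, 'b': 2, 'c': 1}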

--

primitive recursive

http://en.wikipedia.org/wiki/LOOP_%28programming_language%29
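
the defining trick of LOOP, sketched in Python: the iteration count is fixed when the loop is entered, so every program terminates, and exactly the primitive recursive functions are expressible:

  def loop(n, body, state):
      for _ in range(n):       # n is read once, up front; the body
          state = body(state)  # cannot extend the loop
      return state

  # addition as a LOOP program: apply "increment" b times to a
  def add(a, b):
      return loop(b, lambda x: x + 1, a)

  add(3, 4)                    # => 7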

--

http://www.yosefk.com/blog/how-fpgas-work-and-why-youll-buy-one.html

---

see also Self:notes-computer-programming-programmingLanguageDesign-typesOfProgrammingLanguages

---

http://tylerneylon.com/a/learn-lua/

---

 "This is from Herb Sutter, one of the big names in modern C++:
    This is a 199x/200x meme that’s hard to kill – “just wait for the next generation of (JIT or static) compilers and then managed languages will be as efficient.” Yes, I fully expect C# and Java compilers to keep improving – both JIT and NGEN-like static compilers. But no, they won’t erase the efficiency difference with native code, for two reasons. First, JIT compilation isn’t the main issue. The root cause is much more fundamental: Managed languages made deliberate design tradeoffs to optimize for programmer productivity even when that was fundamentally in tension with, and at the expense of, performance efficiency… In particular, managed languages chose to incur costs even for programs that don’t need or use a given feature; the major examples are assumption/reliance on always-on or default-on garbage collection, a virtual machine runtime, and metadata. But there are other examples; for instance, managed apps are built around virtual functions as the default, whereas C++ apps are built around inlined functions as the default, and an ounce of inlining prevention is worth a pound of devirtualization optimization cure.

This quote was endorsed by Miguel de Icaza of Mono, who is on the very short list of “people who maintain a major JIT compiler”. He said:

    This is a pretty accurate statement on the difference of the mainstream VMs for managed languages (.NET, Java and Javascript). Designers of managed languages have chosen the path of safety over performance for their designs.

Or, you could talk to Alex Gaynor, who maintains an optimizing JIT for Ruby and contributes to the optimizing JIT for Python:

    It’s the curse of these really high-productivity dynamic languages.  They make creating hash tables incredibly easy.  And that’s an incredibly good thing, because I think C programmers probably underuse hash tables, because they’re a pain.  For one you don’t have one built in.  For two, when you try to use one, you just hit pain left and right.  By contrast, Python, Ruby, JavaScript people, we overuse hashtables because they’re so easy… And as a result, people don’t care…

Google seems to think that JavaScript is facing a performance wall:

    Complex web apps–the kind that Google specializes in–are struggling against the platform and working with a language that cannot be tooled and has inherent performance problems.

Lastly, hear it from the horse’s mouth. One of my readers pointed me to this comment by Brendan Eich. You know, the guy who invented JavaScript.

    One thing Mike didn’t highlight: get a simpler language. Lua is much simpler than JS. This means you can make a simple interpreter that runs fast enough to be balanced with respect to the trace-JITted code [unlike with JS].

and a little further down:

    On the differences between JS and Lua, you can say it’s all a matter of proper design and engineering (what isn’t?), but intrinsic complexity differences in degree still cost. You can push the hard cases off the hot paths, certainly, but they take their toll. JS has more and harder hard cases than Lua. One example: Lua (without explicit metatable usage) has nothing like JS’s prototype object chain.

Of the people who actually do relevant work: the view that JS in particular, or dynamic languages in general, will catch up with C, is very much the minority view. There are a few stragglers here and there, and there is also no real consensus what to do about it, or if anything should be done about it at all. But as to the question of whether, from a language perspective, in general, the JITs will catch up–the answer from the people working on them is “no, not without changing either the language or the APIs.”

But there is an even bigger problem. All about garbage collectors


--- http://en.wikipedia.org/wiki/Automatic_Reference_Counting

https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/MemoryMgmt/Articles/MemoryMgmt.html#//apple_ref/doc/uid/10000011-SW1

http://developer.apple.com/library/mac/#releasenotes/ObjectiveC/RN-TransitioningToARC/Introduction/Introduction.html

" GC on mobile is not the same animal as GC on the desktop

I know what you’re thinking. You’ve been a Python developer for N years. It’s 2013. Garbage collection is a totally solved problem.

Here is the paper you were looking for. Turns out it’s not so solved: [chart omitted]

If you remember nothing else from this blog post, remember this chart. The Y axis is time. The X axis is “relative memory footprint”. Relative to what? Relative to the minimum amount of memory required.

What this chart says is “As long as you have about 6 times as much memory as you really need, you’re fine. But woe betide you if you have less than 4x the required memory.” But don’t take my word for it: "

" In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management. "

http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/

" However, there is no spec of how much actual memory any individual object occupies, nor is there likely to be. Thus we never have any guarantee when any program may exhaust its actual raw memory allotment, so all lower bound expectations are not precisely observable.

In English: the philosophy of JavaScript (to the extent that it has any philosophy) is that you should not be able to observe what is going on in system memory, full stop. This is so unbelievably out of touch with how real people write mobile applications, I can’t even find the words to express it to you. I mean, in iOS world, we don’t believe in garbage collectors, and we think the Android guys are nuts. I suspect that the Android guys think the iOS guys are nuts for manual memory management. But you know what the two, cutthroat opposition camps can agree about? The JavaScript folks are really nuts. There is absolutely zero chance that you can write reasonable mobile code without worrying about what is going on in system memory, in some capacity. None. And so putting the whole question of SunSpider benchmarks and CPU-bound stuff fully aside, we arrive at the conclusion that JavaScript, at least as it stands today, is fundamentally opposed to the think-about-memory-philosophy that is absolutely required for mobile software development.

As long as people keep wanting to push mobile devices into these video and photo applications where desktops haven’t even been, and as long as mobile devices have a lot less memory to work with, the problem is just intractable. You need serious, formal memory management guarantees on mobile. And JavaScript, by design, refuses to provide them. "

" Now you might say, “Okay. The JS guys are off in Desktop-land and are out of touch with mobile developers’ problems. But suppose they were convinced. Or, suppose somebody who actually was in touch with mobile developers’ problems forked the language. Is there something that can be done about it, in theory?”

I am not sure if it is solvable, but I can put some bounds on the problem. There is another group that has tried to fork a dynamic language to meet the needs of mobile developers–and it’s called RubyMotion.

So these are smart people, who know a lot about Ruby. And these Ruby people decided that garbage collection for their fork was A Bad Idea. (Hello GC advocates? Can you hear me?). So they have a thing that is a lot like ARC that they use instead, that they have sort of grafted on to the language. Turns out it doesn’t work:

    Summary: lots of people are experiencing memory-related issues that are a result of RM-3 or possibly some other difficult-to-identify problem with RubyMotion’s memory management, and they’re coming forward and talking about them.

Ben Sheldon weighs in:

    It’s not just you. I’m experiencing these memory-related types of crashes (like SIGSEGV and SIGBUS) with about 10-20% of users in production.

There’s some skepticism about whether the problem is tractable:

    I raised the question about RM-3 on the recent Motion Meetup and Laurent/Watson both responded (Laurent on camera, Watson in IRC). Watson mentioned that RM-3 is the toughest bug to fix, and Laurent discussed how he tried a few approaches but was never happy with them. Both devs are smart and strong coders, so I take them at their word.

There’s some skepticism about whether the compiler can even solve it in theory:

    For a long while, I believed blocks could simply be something handled specifically by the compiler, namely the contents of a block could be statically analyzed to determine if the block references variables outside of its scope. For all of those variables, I reasoned, the compiler could simply retain each of them upon block creation, and then release each of them upon block destruction. This would tie the lifetime of the variables to that of the block (not the ‘complete’ lifetime in some cases, of course). One problem: instance_eval. The contents of the block may or may not be used in a way you can expect ahead of time."

" I don’t think the real issue is inherent Javascript performance, it’s parasitic DOM loss."

" Technically superior, even when measurable in performance, isn't always "superior" in the real life. Sure, the fact that a programming environment with bare debugging capabilities, almost no profiling and static analysis capabilities to speak of and limited optimization options at best is the best you can get for mobile applications is sad, but I do hope it's just the difficult beginning of an otherwise productive application platform. "

http://ecn.channel9.msdn.com/content/WhyCPPCB2011.pdf

---

in Go, capitalization means a symbol is public

---

http://www.cs.bham.ac.uk/~hxt/cw04/barker.pdf


http://lambda-the-ultimate.org/node/4754#comment-75596

"

Parametric polymorphism vs subtyping

    There are a lot of programmers confused about parametric polymorphism and generics as well. 

While true, I find that most of these programmers are mostly confused about what they are actually confused about. It's not really parametric polymorphism but rather its interaction with subtyping. Ultimately, it's subtyping that is making it complicated, not parametric polymorphism as such (which is straightforward on its own). Without bounded quantification, it just isn't as visible how complicated subtyping is, and how intricate the assumptions are in many OO-style abstractions based on it.

Or in other words, those who don't understand generics usually just fail to understand the implications of subtyping. By Andreas Rossberg at Fri, 2013-06-07 13:36

succinct paper/pointer?

sorry to be dense; it sounds like something everybody in plt knows, but i'm having trouble googling up stuff on it, but i did hit something that to my clueless eyes claims to have resolved it (and maybe the paper even explains it enough for programmers to realize what they are actually confused about, but i'm still struggling.) By raould at Fri, 2013-06-07 20:42


Subtyping and parametric polymorphism are both quite simple on their own, and very complex together. If you gave up many of the functions that you are used to using with parametric polymorphism, subtyping is actually quite simple. It is definitely intuitive in its natural form, and becomes non intuitive only when combined with non intuitive mathematical type theory. By Sean McDirmid at Fri, 2013-06-07 22:05


Intuitions for subtyping are often misleading or fallacious, e.g. with regards to covariance and contravariance when operating on functions, collections, manipulation of structure (the ellipse vs. circle question). Intuition is valuable only when it leads in the right direction (i.e. 'intuitive' can be good or bad).

Is the idea of 'subtype' really worthwhile? We might achieve more precision and reuse (due to less loss of information) focusing on dependent types. By dmbarbour at Fri, 2013-06-07 22:52

"

--

Beyond Beyond Monads

The Sequential Semantics of Producer Effect Systems

    Since Moggi introduced monads for his computational lambda calculus, further generalizations have been designed to formalize increasingly complex computational effects, such as indexed monads followed by layered monads followed by parameterized monads. This succession prompted us to determine the most general formalization possible. In searching for this formalization we came across many surprises, such as the insufficiencies of arrows, as well as many unexpected insights, such as the importance of considering an effect as a small component of a whole system rather than just an isolated feature. In this paper we present our semantic formalization for producer effect systems, which we call a productor, and prove its maximal generality by focusing on only sequential composition of effectful computations, consequently guaranteeing that the existing monadic techniques are specializations of productors.

http://lambda-the-ultimate.org/node/4754#comment-75515

http://www.cs.cornell.edu/%7Eross/publications/productors/

---

" Virgil: a statically-typed language balancing functional and OO features

In PLDI this year: Ben Titzer, "Harmonizing Classes, Functions, Tuples, and Type Parameters in Virgil III" [pdf]

    Given a fresh start, a new language designer is faced with a daunting array of potential features. Where to start? What is important to get right first, and what can be added later? What features must work together, and what features are orthogonal? We report on our experience with Virgil III, a practical language with a careful balance of classes, functions, tuples and type parameters. Virgil intentionally lacks many advanced features, yet we find its core feature set enables new species of design patterns that bridge multiple paradigms and emulate features not directly supported such as interfaces, abstract data types, ad hoc polymorphism, and variant types. Surprisingly, we find variance for function types and tuple types often replaces the need for other kinds of type variance when libraries are designed in a more functional style. 

By Kartik Agaram at 2013-04-11 21:47

"

http://lambda-the-ultimate.org/node/4716

---

http://jamie-wong.com/2013/07/12/grep-test/

---

fork/exec


http://stuartsierra.com/2011/08/30/design-philosophies-of-developer-tools

"

    Plan for integration
    Rigorously specify the boundaries and extension points of your system
    Do not depend on unspecified behavior

And a couple of ideas if you’re starting a new project from scratch:

    The filesystem is the universal integration point
    Fork/exec is the universal plugin architecture

"

" All of Git’s “modules” seem to depend on a few data structures, and higher-level (porcelain) freely mixes and matches lower-level APIs (plumbing) to provide complex functionality. "

---

https://github.com/scalaz/scalaz

https://github.com/milessabin/shapeless

---

An implementation of Scrap your Boilerplate with Class which provides generic map and fold operations over arbitrarily nested data structures,

    val nested = List(Option(List(Option(List(Option(23))))))
    val succ = everywhere(inc)(nested)
    succ == List(Option(List(Option(List(Option(24))))))

--- https://github.com/milessabin/shapeless

---

http://underscoreconsulting.com/training/advanced-scala.html

---

suggests that the following 'language strictness' features should be configurable (e.g. made obligatory or optional) per program module:

  Parameter typing:               Dynamic  / Static
  Side effects (r, w, r/w):       Optional / Mandatory
  Visibility (public, private):   Optional / Mandatory
  Pre-/postconditions:            Optional / Mandatory
  Documentation:                  Optional / Mandatory
  Parameter mode (i, o, i/o):     Optional / Mandatory
  Exceptions:                     Optional / Mandatory

---

http://www.stephendiehl.com/posts/postmodern.html

--

you need polymorphism so that you aren't tempted to do this: http://how-bazaar.blogspot.co.nz/2013/07/stunned-by-go.html

-- haskell lenses toread todo

http://adit.io/posts/2013-07-22-lenses-in-pictures.html

https://news.ycombinator.com/item?id=6082645

---

--- javascript 'with' to dynamically extend the scope chain:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/with

note: this seems to be disliked, probably shouldn't use

at first blush it seems like just a nice generalization of 'let' and 'import'. however, http://stackoverflow.com/a/61676/171761 points out an important flaw:

" Consider the following code

user = {};
someFunctionThatDoesStuffToUser(user);
someOtherFunction(user);

with (user) {
    name = 'Bob';
    age = 20;
}

Without carefully investigating those function calls, there's no way to tell what the state of your program will be after this code runs. If user.name was already set, it will now be Bob. If it wasn't set, the global name will be initialized or changed to Bob and the user object will remain without a name property. "

---

Doug feels that typescript gives him most of what he likes about C#, excepting constraints on generics, attributes, and operator overloading.

when i said, can't you just do attributes in a framework by attaching a field named '_attributes' to every object, containing a dictionary that maps each attribute name to the list of fields in this object that have that attribute (attributes which don't appear in the keys aren't attributes of any field of this object), he replied, 'you're not thinking like a static programming language programmer': if you change the name of a field, you have to remember to also change the string in the _attributes dict, and you might forget, causing inconsistency. a sketch of this workaround is below.
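
a Python sketch of that workaround, with the fragility noted in a comment (the names are hypothetical):

  class User:
      # attribute name -> list of field names, as strings
      _attributes = {'serializable': ['name', 'age']}
      def __init__(self):
          self.name, self.age = 'Bob', 20

  # Rename self.name to self.full_name and nothing forces you to update the
  # string 'name' above; a compiler-checked attribute (as in C#) would catch
  # the mismatch, the stringly-typed dict just goes silently stale.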

-- random thing to read http://www.2ality.com/2012/02/js-pitfalls.html

--

an object-scoped 'self' or 'me'

--

with the HTML components proposals, ppl talk about the idea of components having their own 'Shadow DOM', meaning (i think) an encapsulated namespace associated with an HTML element. Only code within the encapsulated HTML element can access this namespace.

i wonder if we could do something similar? essentially have a special variant of perspective, a perspective family, all of whom share the same global semantics, and all of whom share most of their nodes, but each of whom have some extra 'shadow nodes' available only in that perspective. generalize this, of course. hmmm..

--

WPF 'lookless' criterion: a UI component should never assume a specific hierarchy for its parts (e.g. don't assume that the "OK" button is contained as a part of the same window that the dialog text is in), rather it should allow the client to specify an arbitrary mapping between its parts and actual UI elements

--

js-like (?) use of bob.a to be equal to bob['a']?

or do we want to also adopt Python's convention of implicitly passing 'self' in this case? and distinguishing between fn calls and dict lookups?

-- a comment on JS's 'this':

hmm, actually, i dont really see why ppl dont like it, it seems much like Python's 'self', only implicit, and only (i think) without a distinction between an 'object' and a variable in a lookup table. the trouble with inner functions seems to come from this implicitness and lack of distinction; you can't distinguish between an inner function and a subobject, so the inner function's 'this' gets shadowed; and you can't prevent it just by renaming 'self', because you don't choose self's name. the trouble with comprehension also seems to come from this implicitness. ironic, since ppl (including me) complain about that explicitness.

--

perspectives must be first-class, of course.

in this case the idea of a family of perspectives to implement 'shadow DOMs' can simply be a dict of perspectives whose keys are nodes in the graph.

there are some algebraic properties that are being assumed, tho:

similarly, in the PieTrust reputation graph, perspectives have the property that the nodes and edges and edge weights are identical, but only the labels on the nodes change.

otoh we may want to say that a 'perspective' cannot alter the nodeset, only the edges.

ah, i got it. yes, perspectives vary the edges and edge labels and node labels.

the thing that varies the nodeset is a 'scope'. a scope is the name of that data structure that maps symbol names to nodes. in practice it is realized as a 'composite scope' object, which is a list of scopes; to lookup a symbol, you try the last scope in the list, and if you haven't found it yet then you try the second-to-last, etc. note that this is a special case of a handler tree without branches (single parent), and the same implementation should be used (which allows for multiscopes, e.g. scopes that return multiple nodes for each symbol).
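
Python's collections.ChainMap is this composite-scope object almost verbatim (written innermost scope first, so the search order matches):

  from collections import ChainMap

  local_scope  = {'x': 1}
  global_scope = {'x': 99, 'y': 2}
  scope = ChainMap(local_scope, global_scope)

  scope['x']    # => 1: found in the innermost scope, search stops there
  scope['y']    # => 2: falls through to the next scope in the list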

note that the idea of generalizing a lookup in a dict into a lookup in a handler tree of dicts should itself be abstracted.

so a shadow DOM ish thing would be implemented by a 'global scope' and layered on top of it a bunch of 'local scopes', one for each node.

--

in general, however, handler trees are not just for looking up symbols (exact matching names), but for matching conditions

--

need to be able to explicitly represent e.g. that sort of algebraic/ontological rule relating different 'shadowdom' vs. 'node label' perspectives

--

dont forget homoiconicity

--

in fact, any dict (node) should be transparently replacable by a handler tree!

---


on the theory of various control operators:

A Library of High Level Control Operators (1993) by Christian Queinnec

http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.4790 ~/papers/programmingLanguages/libraryOfHighLevelControlOperators.pdf

"shift to control" https://www.cs.indiana.edu/l/www/ftp/techreports/TR600.pdf#page=103 (~/papers/programmingLanguages/proc5thWorkshopSchemeAndFunctionalProgramming.pdf PDF page 103, "shift to control") (cites previous)

Adding delimited and composable control to a production programming environment http://dl.acm.org/citation.cfm?id=1291178 http://users.eecs.northwestern.edu/~robby/pubs/papers/icfp2007-fyff.pdf ~/papers/programmingLanguages/practicalDelimitedAndComposableControl.pdf (cites previous)

The Theory and Practice of Programming Languages with Delimited Continuations http://repository.readscheme.org/ftp/papers/plsemantics/danvy/db_thesis.pdf (cites shift to control)

Abstracting control O Danvy, A Filinski - Proceedings of the 1990 ACM conference on LISP …, 1990 - dl.acm.org Abstract The last few years have seen a renewed interest in continuations for expressing advanced control structures in programming languages, and new models such as Abstract Continuations have been proposed to capture these dimensions. This article investigates ...

Representing control: A study of the CPS transformation O Danvy, A Filinski - Mathematical structures in computer …, 1992 - Cambridge Univ Press Abstract This paper investigates the transformation of λν-terms into continuation-passing style (CPS). We show that by appropriate η-expansion of Fisher and Plotkin's two-pass equational specification of the CPS transform, we can obtain a static and context-free ...

http://citeseerx.ist.psu.edu/viewdoc/download?rep=rep1&type=pdf&doi=10.1.1.33.7802

---

http://lambda-the-ultimate.org/node/4786

Extensible Effects -- An Alternative to Monad Transformers by Oleg Kiselyov, Amr Sabry and Cameron Swords:

---

http://bayleshanks.com/notes-computer-programming-programmingLanguageDesign-prosAndCons-martinTheLastProgrammingLanguage

---

monads in haskell are side-effect domains

---

coq has an interesting thing where it defines natural numbers inductively via successor on zero and then defines various functions that way, e.g.

Fixpoint repeat (n count : nat) : natlist :=
  match count with
  | O => nil
  | S count' => n :: (repeat n count')
  end.

Haskell does the equivalent thing, but with an explicit test for the base case, and with explicit subtraction rather than destructuring:

    replicate' :: (Num i, Ord i) => i -> a -> [a]  
    replicate' n x  
        | n <= 0    = []  
        | otherwise = x:replicate' (n-1) x  

--

could use Coq for type assertions and proving stuff about Jasper programs, in the same way that Shen uses Prolog

see also http://adam.chlipala.net/cpdt/html/Intro.html

--

Coq has generator-based inductive abstract data types (ADTs), using 'Inductive'. It has function defns using 'Definition' and 'match', or structurally recursive function defns using 'Fixpoint'.

--


syntax: allow multiple symbol names under one declaration. example from Coq:

" As a notational convenience, if two or more arguments have the same type, they can be written together. In the following definition, (n m : nat) means just the same as if we had written (n : nat) (m : nat).

Fixpoint mult (n m : nat) : nat :=
  match n with
  | O => O
  | S n' => plus m (mult n' m)
  end."

--

pattern matching on multiple expressions at once. example from Coq:

" You can match two expressions at once by putting a comma between them:

Fixpoint minus (n m : nat) : nat :=
  match n, m with
  | O , _ => O
  | S _ , O => n
  | S n', S m' => minus n' m'
  end."

--

use of _ for dummy variables

--

" the Uniform Access Principle states that “All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation.” "

--

re: ember vs. angular:

" rogerthis 2 days ago

link

Everybody is trying to emulate the old (VB, Delphi, etc) way.

reply

10098 2 days ago

link

I was just thinking about this yesterday. Remember how easy it was to build a working UI in Delphi? Throw some controls on the form with a few clicks, maybe write some code, and it just worked. By comparison, integrating various javascript widgets into existing applications still remains such a pain, you'd think we've receded into the stone age of application development. "

--

match statements in coq are like and-or trees are like jasper handler trees

--

toread++: https://speakerdeck.com/stilkov/clojure-for-oop-folks-how-to-design-clojure-programs

toread http://www.slideshare.net/yoavrubin/oop-clojure-style-long

--

crazy confusing error examples in http://www.haskell.org/haskellwiki/Generalised_algebraic_datatype

--

"

Keep in mind that the unification above works because there is a unique isomorphism between the types and identity functions in a programming language. This is the essential criteria that justifies unifying two constructs in a programming language.

When one syntactically unifies constructs that are conceptually distinct, the result is less justifiable.

One example is the unification of functions and lists in LISP - which creates some very interesting possibilities for introspection, but it means that functions carry around a lot of observable metadata that breaks foundational properties like parametricity and extensional equality.

Another example is Java and C#'s unification of value types (like int) and object types (like Int). Though C#'s approach is more automatic, both create strange observable properties, such as exposing pointer equality on boxed Int's that differs from the underlying int equality.

In the long-run, such unification of disparate concepts will be recognized as "clever hacks" rather than valid programming language design practices. " http://lambda-the-ultimate.org/node/1085#comment-11645

--

http://tech.puredanger.com/2011/10/12/2-is-a-smell/

--

learn how clojure isa works, see http://clojure.org/multimethods

--

http://lambda-the-ultimate.org/user/3938/track

(i wonder what Tom Lord is up to these days? i can't find a website about him)

btw some random advice from Lord that may be relevant to me: http://lambda-the-ultimate.org/node/2769#comment-41563

--

" Constraints have been left to language definitions (and scoped globally in languages) because we haven't developed ways to define and enforce (and scope) constraints equivalent in expressive power to the ways we can define and call (and scope) procedures. ..

In the absence of a set of constraints chosen by the caller and enforced by the runtime about what a callee is not allowed to do, the power of these function calls absolutely precludes on safety grounds a lot of useful programming techniques that would exploit the extended function calls: patterns like callbacks or client code, for example, become visibly insane when a callee can make a binding injection in your environment that may shadow an existing binding. " -- http://lambda-the-ultimate.org/node/4609#comment-72916

--

http://lambda-the-ultimate.org/node/4686

" Scheme language conundrum regarding delay and force. The following is a question originally posed by Alan Manuel Gloria to the Scheme Standardization list. I think that it is a particularly good question, so I thought I'd share it.

(define x 0)
(define p
  (delay
    (if (= x 5)
        x
        (begin (set! x (+ x 1))
               (force p)
               (set! x (+ x 1))
               x))))

(force p) => ??
x => ??

Delay and force have their usual (for PL theory discussions) meanings; the questions are about promises that force themselves in the course of their forcing (ie, forms for which 'force' causes a recursive invocation of 'force' on the same promise). The question becomes interesting when, as in the above case, the recursion is nontail. Some current implementations return 5 for p and 10 for x and some throw an exception reporting a reentrant promise. Some implementors claim that 5 for p and 5 for x ought to be allowed since the delayed form returns at the point of forcing and no further computation ought to be done on it, whereas others say that x is visible outside the scope of the delayed computation and therefore the computation of the delayed form must be completed for its side effect on x, even after the force has taken place. FWIW, I agree with the latter point of view. The questions, roughly, are these:

    Ought the above be legal with specified results, legal with undefined results, or is it an error?
    If it is an error, must an implementation detect and report the error via the exception mechanism, or may the dread curse of nasal demons be invoked?
    If it is legal, with specified meaning, then what specific meaning ought it to have?
    If it is legal, then may force return while the remainder of the computation caused by the force completes in another thread (making x subject to a possible race condition)?
    If it is made legal, is there any legitimate use for it? (ie, is there any useful purpose the language would fail to serve if it were banned)?

It complicates the discussion somewhat that this language "feature" may interact with reified winding continuations. So, bearing in mind that the committee is under no obligation to listen to or agree with this forum should a consensus be reached here, what do you think this code should do? "

--

http://lambda-the-ultimate.org/node/4686#comment-74357

--

http://www.aosabook.org/en/

---


hard problems in language design:



cheap syntactic sugar for ignoring exceptions of a certain type; as noted above, an important special case is converting a KeyError to returning nil
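
Python already has piecemeal versions of both, which any new sugar would have to beat:

  from contextlib import suppress

  d = {}
  d.get('missing')          # the KeyError special case: returns None (nil)

  with suppress(KeyError):  # the general case: ignore one exception type
      del d['missing']      # silently skipped instead of raising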

---

ntoshev 749 days ago

link

Personally I like generators, generator expressions and list comprehensions much more than the Ruby equivalents (chaining each/filter/etc). Python is cleaner overall, and if you want metaprogramming you can still do a lot of it. Also Python has better libraries and runs on App Engine.

-- http://news.ycombinator.com/item?id=682171

I wish Ruby were good, but it's so fucked:

  Matz's decision-making process
    He tries to make Ruby be all things to all people
      Lots of confusing sugar and overloading baked in
    I much prefer Guido's hard pragmatism
  The pointless panoply of function types: Methods, Blocks, Procs, Lambdas
    All intertwined and yielding into one another.
    I love that in Python there is only one:
      Objects with a __call__ method, defined metacircularly.
  The culture of adding/overloading methods on base classes
    Many gems do this en masse, and there are a lot of low-quality gems
      Seriously, Ruby might have a ton of new gems for everything, but they
      are almost universally awful. A culture of sharing any code that *could*
      be a module, no matter how trivial, leads to immature crap being widely
      used because it was there already, with a mess of forks to clean up
      afterwards. At least most of them are test-infected...
        Python does come with a few stinkers, mostly ancient syscall wrappers.
    Especially disastrous because it's unscoped, and infects the whole process
      For a language with four scoping sigils it sure fucks up scope a lot
    The syntax practically begs you to do it, anything else would look shitty
  The Matz Ruby Implementation
    The opposite of turtles-all-the-way-down (Smalltalk crushed beneath Perl)
    It actively punishes you for taking advantage of Ruby's strengths
    The standard library is written almost entirely in C
      It doesn't use Ruby message dispatch to call other C code.
      That means that if you overload a built-in, other built-ins won't use it
    Anything fiddly that's not written in C will be dog slow

--- http://news.ycombinator.com/item?id=682305

http://www.zedshaw.com/blog/2009-05-29.html talks about inconsistent library conventions between functions and their inverses (and near-inverses) in Python, e.g. mystuff.remove(mything) vs. mystuff.append(mything) vs. del mystuff[4]

---

