Table of Contents for Programming Languages: a survey
Lisp is a language family, not a single language. In lists of popular or beloved languages, "Lisp" often comes up, but it's unclear which "Lisp" is the most popular or beloved among the Lisps.
Common Lisp and Scheme are sometimes identified as the two most popular Lisp dialects (eg. "Today, the most widely known general-purpose Lisp dialects are Common Lisp and Scheme." -- [1]).
In online discussions about Lisp, I get the impression that the following four variants or subfamilies are the most popular or talked about:
Common Lisp is a committee-driven standardization effort that was intended to unify a handful of (but not all) existing Lisp projects and dialects in the early 80s (to place this in context, Lisp was invented in 1958, so there were already a number of Lisp variants and descendants by 1980). It gained a lot of momentum and many commercial implementations of it were written. It is standardized but does not have a canonical implementation. It is said to be a 'large' language, and a good practical choice for getting real projects done in Lisp.
Scheme is a more minimal Lisp invented in 1975 (or, some would say, between 1975 and 1980), which also supports some features Common Lisp doesn't, such as a language primitive for first-class continuations (also, tail call optimization is required by the Scheme spec but not by the Common Lisp spec; a short Scheme sketch of these two features follows this list of dialects). It permits a functional style of programming. It is said to be small and elegant. It is standardized but does not have a canonical implementation. One Scheme variant, Racket, is said to have advanced facilities for metaprogramming and for prototyping new programming language features.
Clojure is a more recent (2007) Lisp variant. It compiles to JVM bytecode and is known for its embrace of immutable data structures and its focus on concurrency constructs. It has a canonical implementation but is not standardized.
Elisp (started around 1984 or 1985 [2] [3]) is a language purpose-built for Emacs, a popular text editor software application. Most of Emacs is implemented in Elisp; Elisp is exposed to the user, who can use it as a scripting language to create Emacs extensions or to modify the behavior of Emacs. Because Emacs is used by many people, Elisp is used by many people; however, Elisp is not much used for anything outside of Emacs. Furthermore, Elisp's creator now promotes GNU Guile, a Scheme implementation, over Elisp (although Guile can also run Elisp) [4].
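A short sketch of the two Scheme features mentioned above (standard Scheme; count-down is just an illustrative name):

(define (count-down n)                         ; tail call optimization: the recursive call is in tail
  (if (zero? n) 'done (count-down (- n 1))))   ; position, so this loop runs in constant stack space
(count-down 1000000)                           ; => done, without overflowing the stack

(call-with-current-continuation                ; first-class continuations: call/cc hands "the rest of
  (lambda (return)                             ; the computation" to a procedure as a callable value
    (+ 1 (return 42))))                        ; => 42; invoking return abandons the pending (+ 1 ...)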
I decided to choose a couple of these to focus on. I chose Racket and Clojure. I chose Racket because it is perhaps the most popular Scheme implementation, and because its facilities for metaprogramming intrigued me; I chose a Scheme in the first place because Scheme is one of the two (supposedly) "most widely known general-purpose Lisp dialects", because it is small, and because it is said to be elegant.
I ruled out Elisp because it seems that its popularity is due not to its own merits, but to the popularity of Emacs.
Between Common Lisp and Clojure, I chose Clojure because I am intrigued by its concurrency constructs and because it is the most recently designed popular Lisp (and so I hope that it has improved upon what came before it), and also because, as my purpose is to learn the 'essence' of Lisp and to describe it in a book surveying many programming languages, I am frightened by Common Lisp's reputation as a large language.
So, because Lisp is a beloved language family, there are three separate chapters about it. In this chapter, I talk about Lisp in general, about the various variants of Lisp and comparisons between them, and about Lisps other than Racket and Clojure, although we don't go into much detail about any particular Lisp. In the chapter on Racket, we explore the language of Scheme, and also Racket-specific constructs. In the chapter on Clojure, we explore the language of Clojure.
todo: am i sure i dont want to cover CL also?
todo: Paul Graham's list of attributes of lisp; Peter Norvig's list; the one from that book about lisp (games? aliens?)
notes/todo:
Two things about lisp:
1) Most Lisp operators evaluate their arguments before applying some further transformation, but quote (abbreviated as the prefix character ') does not; the result of evaluating a quote form is its argument, unevaluated and unchanged. This is usually used to separate data from code. Note that this is different from saying "consider my argument as a single thing"; to do that you would enclose the argument in parentheses; for example (car '(1 2 3)) = 1, but (car '((1 2 3))) = (1 2 3) (see the REPL sketch after point 2).
2) Cons cells are just another name for the building block of linked lists. They are structs with two pointers, call them payload and next: the first (payload) points to the data contained in this cons cell, and the second (next) points to the next cons cell in the linked list. You can think of a Lisp list structure in two ways: as a binary tree (representing how the pointers between the cons cells are actually structured: the first cons cell in a list is the root of the tree, with its payload and the next cons cell as its two children, and so on), or as a nested structure of lists (a non-uniform tree; this is the way that the Lisp language semantically interprets these things). The underlying nature of cons cells as pairs of pointers (as opposed to flat lists) shows through in two places: (a) to get the first element of a list you call car, but to get the second element you call (car (cdr lst)) rather than just (cdr lst); and (b) the last cons cell in a list has a cdr of '() (rather than the value of the last item in the list).
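For example, at a standard Scheme REPL (lst is just an illustrative name):

(+ 1 2)           ; => 3          arguments are evaluated (point 1)
'(+ 1 2)          ; => (+ 1 2)    quote: the expression itself, as data
(car '(1 2 3))    ; => 1
(car '((1 2 3)))  ; => (1 2 3)

(cons 1 (cons 2 (cons 3 '())))  ; builds the list (1 2 3) out of three cons cells (point 2)
(define lst '(1 2 3))
(cdr lst)         ; => (2 3)     the rest of the list, not the second element
(car (cdr lst))   ; => 2         the second element
(cdr '(3))        ; => ()        the last cons cell's cdr is the empty list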
recommended reading:
Best practices:
Features:
Good at:
Retrospectives:
Popularity of various lisps:
Opinions:
Opinionated comparisons:
common lisp implementations: http://common-lisp.net/~dlw/LispSurvey.html http://0branch.com/notes/tco-cl.html short answer: SBCL
SBCL internals:
Lisp is a language family, not a single language. What defines a Lisp?
http://www-formal.stanford.edu/jmc/lisp20th/node2.html
Quoted from McCarthy?, "LISP - Notes on its past and future", 1980:
Quoted from http://norvig.com/Lisp-retro.html:
Quoted and paraphrased from http://www.paulgraham.com/diff.html :
A comparison of various Lisps is found in the first part of the essay More Than Just Words: Lambda, the Ultimate Political Party by Kent Pitman. That essay makes the point that aside from CONS, there is little or no "common core of operators that were present throughout the family with the same name and semantics". He examines some candidates, and rejects them all:
Links:
The short answer is 'Racket or Clojure or Common Lisp'.
Comparisons:
Links:
Because Scheme is so simple, there are many variant "Schemes" and many Scheme implementations. I've chosen to focus on the variant Racket, which has its own chapter.
Pros:
people: Guy Steele
Some of Scheme's history is described in the R7RS-small Scheme standard document.
Scheme started out as an implementation, described in a series of published memos/technical reports and papers (the Lambda Papers, published between 1975 and 1980). The first of these, MIT AI Lab Memo AIM-349: Scheme: An Interpreter for Extended Lambda Calculus, is now sometimes retrospectively referred to as "R0RS". Another memo, MIT AI Lab Memo AIM-452: The Revised Report on SCHEME: A Dialect of LISP, published in 1978, is sometimes retrospectively referred to as "R1RS". Starting at least as early as 1981, projects to create separate implementations of Scheme sprang up outside of MIT and diverged, and in 1984 a committee began meeting to standardize Scheme [38] (apparently using consensus decision-making [39]). The result of their work was The revised revised report on Scheme, or an uncommon Lisp, published in 1985 and retrospectively referred to as R2RS. In 1986 the committee issued a revision, Revised(3) Report on the Algorithmic Language Scheme, or R3RS, which is apparently when the RnRS convention started. R4RS was issued in 1991, and the closely related IEEE standard (IEEE 1178-1990) was approved around the same time. In 1998, R5RS added high-level hygienic macros, multiple return values, a 'load' command, and eval [40]; [40] claims that R5RS is still the most widely implemented version of Scheme today. 1998 is also when the SRFI ("Scheme Requests for Implementation") process was begun; an SRFI is a proposal/standard for a Scheme extension or library.
In 2002-2003 a Steering Committee was set up to supervise future language revisions. In 2007, R6RS was produced, breaking from the consensus decision-making process, and was ratified by a vote of over 60%. R6RS "specifies a much broader language" [41]; it added Unicode, various standard library functions, modules, exceptions, a metaprogramming tool called syntax-case, and requirements to support more numeric types [42].
However, according to the Background section of the R7RS-small spec, "most existing R5RS implementations (even excluding those which are essentially unmaintained) did not adopt R6RS, or adopted only selected parts of it". Therefore, in 2009 the Scheme Steering Committee decided to divide Scheme into two "separate but compatible" languages: Scheme-small and Scheme-large. In 2013 R7RS-small was published, and as of this writing, R7RS-large is still being written.
See also http://en.m.wikipedia.org/wiki/History_of_the_Scheme_programming_language
Because it is a small language, there are many Scheme implementations written for pedagogical purposes, for practice, or for fun, and reading these is sometimes said to be a good way to learn about programming language implementation.
" Data and procedures and the values they amass, Higher-order functions to combine and mix and match, Objects with their local state, the messages they pass, A property, a package, the control point for a catch-- In the Lambda Order they are all first-class. One Thing to name them all, One Thing to define them, One Thing to place them in environments and bind them, In the Lambda order they are all first-class. " -- Scheme R2RS aka MIT AI Memo 848, https://lists.gnu.org/archive/html/guile-devel/2010-12/msg00032.html
The Scheme Steering Committee Position Statement (2009) says
"It is almost misleading to call Scheme a "programming language;" it would be more accurate to characterise Scheme as a family of dialects, all loosely related by the common features of:
going on to note that the following additional features are often necessary yet are typically implemented in a non-standardized way:
Because of the need to use implementation-dependent variants of these latter features, they call Scheme "the world's most unportable programming language".
and they also make note of "two key technologies" that should (in the future) "enable great portability":
https://www.more-magic.net/posts/internals-data-representation.html https://www.more-magic.net/posts/internals-gc.html
Racket is now built on top of Chez Scheme
Retrospectives:
Because it is so well-liked, Clojure gets its own chapter.
See the chapter on Clojure.
Because the Lisp family is so well-liked, Racket gets its own chapter (as a representative of Scheme/Lisp).
See the chapter on Racket.
A committee-led effort to standardize various MACLISP variants into one language.
Reference:
retrospectives:
Cons:
Best practices and style guides and tooling and libraries:
Suggestions:
Tutorials:
Books:
Opinions:
Implementations:
When reading about lists, you sometimes get the idea that everything is built out of (nested) lists of atoms; but then other times you get the idea that everything is built out of nested cons cells, that is, binary trees with atom leaf nodes, or equivalently 'pairs', or equivalently '(nested) lists of length 2', or equivalently '2-ary nested lists'.
It's actually nested cons cells (pairs, i.e. 'lists' of fixed length 2); n-ary lists are represented using a terminator, NIL, and eg (1 2 3 4) is represented as (1 . (2 . (3 . (4 . NIL)))).
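For example, in standard Scheme (where the empty list '() plays the role of NIL):

'(1 . (2 . (3 . (4 . ()))))               ; dotted-pair notation; the reader accepts it and prints it back as (1 2 3 4)
(cons 1 (cons 2 (cons 3 (cons 4 '()))))   ; the same list, built one cons cell at a time
(cons 1 2)                                ; => (1 . 2), a single cons cell that is not a proper list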
See also Elisp manual: Lists and Cons Cells.
Links:
" Rather than parsing some ad-hoc syntax like Pascal, C or Algol, straight to a syntax tree that is hidden from the programmer, Lisp takes a layered approach: parse a simple syntax to structured data, let the programmer have a chance to mangle it, and then parse the structured data as the program source. " -- [49]
"...the reader for a basic Lisp is approx 100 lines. No fancy Common Lisp readtables, reader macros, etc., to be sure, but something that will parse integers, lists, strings, handle quote, etc." -- http://www.findinglisp.com/blog/2008/06/timeless-desert-island-language.html
http://shenlanguage.org/ http://shenlanguage.org/shendoc.htm https://github.com/Shen-Language/wiki/wiki https://github.com/Shen-Language/shen-sources/blob/master/doc/porting.md
Opinions:
Tutorials:
To fit within the small memory sizes of many machines of the time, Elisp started out with a focus on being small [51].
https://news.ycombinator.com/item?id=8246325
http://www.gnu.org/gnu/rms-lisp.html
http://www.slac.stanford.edu/comp/unix/gnu-info/elisp_2.html
http://www.emacswiki.org/emacs/WhyDoesElispSuck
https://www.gnu.org/software/emacs/manual/html_node/eintr/index.html
http://www.gnu.org/software/emacs/manual/html_node/elisp/index.html#Top
https://www.gnu.org/software/emacs/manual/html_mono/cl.html
http://lgmoneda.github.io/2017/03/15/elisp-summary.html
http://ergoemacs.org/emacs/elisp.html
Tutorials:
Variants and implementations:
Links:
Descended from Scheme
Pros:
Cons:
Features:
not as popular or well-regarded as Racket, although it seems to be more portable and closer to 'embeddable'.
todo:
See also Guile section of proj-plbook-plChImplementationCaseStudies, [[proj-plbook-plChMiscIntermedLang?]], proj-plbook-plChSingleIntermedLangs, proj-plbook-plChTargetsImpl.
Introductory links:
Variant implementations / libraries:
A language for 'procedural reflective' metaprogramming. See its section in proj-plbook-plChMetaprogramming for more.
http://web.cs.wpi.edu/~jshutt/kernel.html
see also:
"The "magic" is that the "$vau" operative closes over the environment when it is defined so that it can execute later in that environment--this is the "static/lexical environment". However, when you actually execute the "$vau" operative, you also pass in the "execution/dynamic" environment so that the code that the "$vau" operative runs can choose to do what it wants with the arguments--do nothing, evaluate them in the lexical environment, or evaluate them in the dynamic environment.
It's a powerful concept, but difficult to compile so got kind of relegated from the Lisp/Scheme languages." -- bsder
Applicatives are ordinary functions that take as input the values of their arguments.
Operatives are fexprs, functions whose arguments are passed in without being evaluated. (todo: is this precisely correct? Shutt never says fexprs == operatives; update: in chapter 1 of his thesis it seems like he says yes, operatives are fexprs)
todo: how is this related to non-strict/lazy evaluation? With operatives, are you guaranteed to be able to inspect/reflect upon the thunk for the argument passed in (in contrast to most lazy languages, in which thunks are opaque; all you can do is pass them around, or force their evaluation)? The reply at http://www.reddit.com/r/programming/comments/msci7/fexpr_the_ultimate_lambda/c33sylr says yes, this is it, but Shutt's thesis says that most of what he is talking about is orthogonal to lazy/eager.
Links:
"If you go with (hygenic) fexprs instead of macros, they're both easier to implement and more expressive, at the expense of performance.
Check out John Shutt's thesis for more information on hygenic fexprs: https://web.wpi.edu/Pubs/ETD/Available/etd-090110-124904/unrestricted/jshutt.pdf " -- [57]
" Kernel is ideal for creating small embedded languages, where you can selectively expose parts of the parent language to perform general purpose computation, without giving any access to sensitive code.
Kernel is like Scheme, except environments are first class objects which you can mutate and pass around. A typical use for such environment is for the second argument of $remote-eval, to limit the bindings available to the evaluator. If you treat an eDSL as a set of symbols representing its vocabulary and grammar, then you bind them to a new environment with $bindings-environment, then passing a piece of code X to eval with this resulting argument as the second environment, will ensure X can only access those bindings (and built-in symbols which result from parsing Kernel, such as numbers), and nothing else.
There's a function make-kernel-standard-environment for easy self-hosting.
Trivial examples:
($define! x 1) ($remote-eval (+ x 2) (make-kernel-standard-environment)) > error: unbound symbol: x
($define! y 2) ($remote-eval (+ x y) (make-environment ($bindings->environment (x 1)) (make-kernel-standard-environment))) > error: unbound symbol: y
($remote-eval (+ x y) ($bindings->environment (x 1) (y 2) (+ +))) > 3
($define! x 1) ($define! y 2) ($remote-eval (+ x y) (get-current-environment)) > 3
$remote-eval is a helper function in the standard library which evaluates the second argument in the dynamic environment to get the target environment, then evaluates the first argument in that. The purpose of this is to provide a blank static-environment for o to be evaluated, so no bindings from the current static environment where $remote-eval is called from are made accessible to the first argument.
($define! $remote-eval ($vau (o e) d (eval o (eval e d))))
If you're familiar with Scheme, you may notice the absence of quote.
And contrary to the opinions in the article, Kernel is the most expressively powerful language I know, and I'd recommend everyone learn it. You have a powerful but small and simple core language, from which you can derive your DSLs with as little or much power as you want them to have. " -- sparkie
"Here's a small implementation of Kernel in around 700 lines of Haskell [58]. Note that it's not Kernel proper, as it uses delimited continuations instead of the guarded undelimited continuations in the Kernel report, plus there's a few other differences and some bugs.
Anyone looking for a more complete implementation, check out klisp [59]. It's not 100%, but has some stuff not included in the Kernel report, some of it lifted from R6RS. The main thing it's missing which it desperately needs is an FFI." [60]
http://axisofeval.blogspot.com/2013/05/a-new-low-in-programming-language.html
https://github.com/marcpaq/arpilisp
written in assembly
"a Lisp-1 with shorter names and fewer parentheses than most other Lisps, and some reader macros to make anonymous functions easier to define" [61]
http://www.archub.org/arcsug.txt
https://gist.github.com/jmikkola/b7c6c644dff1c07891c698f0a527a890
https://github.com/Robert-van-Engelen/tinylisp https://github.com/Robert-van-Engelen/tinylisp/blob/main/tinylisp.pdf
99 lines of C
http://blog.fogus.me/2020/12/22/torilisp-an-ersatz-lisp-for-tiny-birds/
First non-prototype implementation of Scheme (?): http://www.paulgraham.com/thist.html
http://mumble.net/~jar/tproject/
retrospective:
" Somewhere in the T 2 effort, Kent Pitman, another Lisp wizard, came down to Yale from MIT. He and Jonathan poured an immense amount of design effort into the language, and it was just really, really *clean*. Small (but difficult) things: they chose a standard set of lexemes and a regular way of assembling them into the names of the standard procedures, so that you could easily remember or reconstruct names when you were coding. (I have followed this example in the development of the SRFIs I've done for the Scheme community. It is not an easy task.)
Larger, deeper things: they designed a beautiful object system that was integrated into the assignment machinery -- just as Common Lisp's SETF lets you assign using accessors, e.g., in Common Lisp (setf (car x) y) is equivalent to (rplaca x y) in T, (set! (car x) y) was shorthand for ((setter car) x y) Accessor functions like CAR handled "generic functions" or "messages" like SETTER -- CAR returned the SET-CAR! procedure when sent the SETTER message. The compiler was capable of optimising this into the single Vax store instruction that implements the SET-CAR! operation, but the semantic machinery was completely general -- you could define your own accessor procedures, give them SETTER methods, and then use them in SET! forms.
(This turned out to be very nice in the actual implementation of the compiler. The AST was a tree of objects, connected together in both directions -- parents knew their children; children also had links to their parents. If the optimiser changed the else-part of an if-node N with something like this (set! (if-node:else n) new-child) which was really ((setter if-node:else) n new-child) the if-node:else's SETTER method did a lot of work for you -- it disconnected the old child, installed NEW-CHILD as N's else field, and set NEW-CHILD's parent field to be N. So you could never forget to keep all the links consistent; it was all handled for you just by the single SET! assignment.) " -- [62]
has f-exprs [63]
http://blog.fogus.me/2011/05/03/the-german-school-of-lisp-2/ https://news.ycombinator.com/item?id=2509696
http://www.software-lab.de/down.html http://www.software-lab.de/miniPicoLisp.tgz "mini Pico Lisp is basically Pico Lisp but without any libraries." [64]
https://picolisp-explored.com/
http://sites.google.com/site/waspvm/
https://github.com/kanaka/miniMAL
"A Clojure inspired Lisp implemented in less than 1024 bytes of JavaScript?...with JSON source, macros, TCO, interop and exception handling."
R5RS Scheme.
retrospectives:
Scheme-79: a LISP chip.
Items in memory were 32 bit tagged elements: 24 data bits, 7 type bits, and 1 garbage collection bit. The 24 data bits could be 'immediates' or pointers.
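A rough sketch of packing and unpacking such a tagged word in Scheme (the particular bit positions used here, gc bit on top, then 7 type bits, then 24 data bits, are an assumption for illustration, not taken from the Scheme-79 papers):

(define (make-word gc type data)          ; gc: 0 or 1, type: 0-127, data: a 24-bit value
  (+ (* gc (expt 2 31)) (* type (expt 2 24)) data))
(define (word-data w) (modulo w (expt 2 24)))                 ; low 24 bits
(define (word-type w) (modulo (quotient w (expt 2 24)) 128))  ; next 7 bits
(define (word-gc w) (quotient w (expt 2 31)))                 ; top bit
(word-type (make-word 1 5 1000))          ; => 5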
Based on Norvig's Python toy Lisp implementation.
A lisp that compiles to Python bytecode.
Links:
" femtolisp is about 150kb, is very self-contained, and has the following features:
...it is fast, ranking among the fastest non-native-compiled Scheme implementations. It achieves this level of speed even though many primitives (e.g. filter and for-each) are written in the language instead of C. femtolisp uses a bytecode compiler and VM, with the compiler written in femtolisp. Bytecode is first-class, can be printed and read, and is "human readable" (the representation is a string of normal low-ASCII characters).
femtolisp is a simple, elegant Scheme dialect. It is a lisp-1 with lexical scope. The core is 12 builtin special forms and 33 builtin functions. ... too often I see people omit some of the obscure but critical features that make lisp uniquely wonderful. These include read macros like #. and backreferences, gensyms, and properly escaped symbol names ... Another design goal is to avoid spurious novelties. Many others offering their own "shiny new" lisp dialects get carried away and change anything that strikes their fancy. These changes have no effect except incompatibility, and often make the language worse because the new design was not as carefully thought out and has not stood the test of time. For example, how does it help to remove backquote? One design changes the syntax of quote. Some systems disallow dotted lists. ... I would like to refute the idea that...tail call optimization...makes interpreters slow. Look at the "tiny" subdirectory or the "interpreter" branch to see a pure s-expr interpreter with efficient TCO. All you have to do is keep track of whether you're in tail position, which can be done very cheaply. These interpreters are difficult to beat for speed, yet they have lexical scope and TCO. " [66]
Used in the Julia language's implementation to write the parser.
The "33 builtin functions" are listed near the top of [67], and are:
[68] also lists the following (perhaps that page is out of date and these are no longer primitive?):
and [69] also lists the following 'nonstandard builtin functions' in addition to the above:
According to [70], the 12 builtin special forms are:
[71] notes that the following of these could be derived from the others: cond, and, or, for, prog1
The page [72] also contains a listing of standard library functions.
In various files in the 'tiny' directory (which I think contains one or more forks of the main code?), the following 10 special forms are named:
these files also contain the following list of 23 builtin functions (different from the previous list):
One of the files in the tiny directory (tiny/lisp2.c) contains a few extra builtin functions:
The file [73] begins with a table of instructions including:
Links:
https://github.com/robpike/lisp
https://github.com/akkartik/wart http://akkartik.name/post/wart
https://web.archive.org/web/20040305005602/http://modeemi.cs.tut.fi/~chery/lisp500/
A Lisp targetting Lua
"Fennel is a lispy language that compiles to, maintains the semantics of, and seamlessly interacts with Lua!" [74]
https://fennel-lang.org/ https://fennel-lang.org/rationale
Opinions:
Why would one call it 'Lisp' (which stands for 'LISt Processor'), when for example it does not do list processing as in Lisp?
> (list 1 2 3 4)
[string "return list(1, 2, 3, 4)"]:1: attempt to call a nil value (global 'list')
> (append '(1 2 3) '(3 4 5))
Compile error in '3' unknown:3: cannot call literal value" [https://news.ycombinator.com/item?id=18022475]
Discussions:
https://github.com/udem-dlteam/ribbit
"A small and portable Scheme implementation with AOT and incremental compilers that fits in 4K. It supports closures, tail calls, first-class continuations and a REPL."
A Lisp targetting Lua and Javascript
A Lisp targetting Lua
https://scheme.fail/ https://scheme.fail/manual/loko.html
Compiler to native
http://opusmodus.com/ uses sexprs
https://github.com/kragen/urscheme http://canonical.org/~kragen/sw/urscheme/
"Small. The implementation is about 1600 lines of Scheme, plus 700 lines of comments and blank lines. Compiled with itself on x86 Linux and stripped, the executable is 134kiB." -- [77]
"It hardly implements anything beyond what's in R5RS, and it omits quite a bit of the stuff that is in R5RS; for example, files, macros, vectors, eval, quasiquotation, call/cc, and floating-point ("inexact numbers"). Only things that were needed for the compiler to compile itself and run some unit tests are included." -- [78]
" Limitations
The following are missing: user-defined macros, eval, quasiquotation, first-class continuations, vectors, numerical constants with radix prefixes, ports, "let*", "letrec", non-top-level defines ("internal definitions"), "do", promises, non-integer numbers, bignums, load, apply, case-insensitivity, multiple-value returns. Pairs are immutable, so there are no set-car! and set-cdr!. Despite this, eqv? is not the same as equal? for lists. Procedures are allowed to take fixed numbers of arguments, or any arbitrary number of arguments, but the normal at-least-N syntax (lambda (a b . rest) ...) is not supported. Rebinding standard procedures may break your program. However, some standard procedures are treated as special forms by the compiler, so rebinding them will rarely have any effect. Integer arithmetic silently overflows. Most arithmetic operators are omitted; only 1+, 1-, +, -, quotient, and remainder are present. Very many standard library procedures are missing. Only the following 55 procedures were actually present last time I updated this list: procedure? string? make-string string-set! string-ref string-length car cdr cons pair? symbol? symbol->string string->symbol display newline eq? current-input-port read-char integer? remainder quotient < eof-object? char? integer->char char->integer list length assq memq memv append not string-append char-whitespace? char<? char<=? char-between? char-alphabetic? = char=? eqv? equal? string=? null? boolean? number? for-each map reverse string->list list->string number->string string->number write. make-string only takes one argument. read-char takes a port argument and ignores it; write, display, and newline do not take port arguments. current-input-port returns nil. map and for-each only take two arguments. number->string only takes one argument. string->symbol will be slow with enough symbols.
Extensions
These are not in R5RS.
(display-stderr foo) is like display, but uses stderr. (exit 37) makes the program exit with exit code 37. (error foo bar baz) aborts the program and sends to stderr "error: " followed by foo, bar, and baz. 1+ and 1- are as in Common Lisp. (char->string c) is like (make-string 1 c). (escape string) returns a list of character strings which, when concatenated, form a string representing the original string by escaping all the backslashes, quotes, or newlines in the string by preceding them with a backslash. #\tab is the tab character. I wouldn't think to mention it but apparently Stalin doesn't support this.
Bugs
Output is unbuffered, which accounts for a third of its run-time.
I still don't have a garbage collector, and programs crash when they run out of memory. " -- [79]
TODO https://en.wikipedia.org/wiki/PreScheme
An Incremental Approach to Compiler Construction (Ikarus Scheme)
https://github.com/oriansj/Slow_Lisp
"This is just a simple lisp interpreter that I am working on for fun. This is to help me figure out what I want to create when I write a Lisp interpreter in assembly for stage0. It also shows how a trivial garbage collector is implemented using fixed sized allocations. "
http://t3x.org/klisp/index.html
" Kilo LISP is a small interpreter for purely symbolic LISP. Its source consists of 25K bytes of comprehensible code (20KB C, 5KB LISP) and it runs in 64K bytes of memory. There is also a version for CP/M that is written in T3X and runs in 48K bytes of memory. Despite its small size Kilo LISP offers:
lexical scoping
tail call elimination
macros
quasiquotation
variable-argument functions
constant-space garbage collection
image files
keyboard interrupt handling
The code should compile with any C89 (K&R2/ANSI/ISO) C compiler. It even compiles with Turbo C 2.0 or SubC and with K&R (pre-ANSI) C when using unproto. The CP/M version requires T3X/Z version 13 or later. The interpreter does not use any stdio functions and compiles to a static executable of
29K bytes using SubC on FreeBSD x86-64
31K bytes using T3X/Z on CP/M (Z80)
13K bytes using Turbo C on DOS (8086)
11K bytes using Mark Williams C on Coherent (80286)
512K bytes using GCC on FreeBSD x86-64 (LOL)
Language-wise, Kilo LISP is a LISP-1 using tail recursion to implement iteration. It is a bit like Scheme, but using more LISPy syntax and function names. See the manual for details. " -- http://t3x.org/klisp/index.html
" ACL2 formalizes only a subset of Common Lisp. It includes such familiar Lisp functions as cons, car and cdr for creating and manipulating list structures, various arithmetic primitives such as +, *, expt and <=, and intern and symbol-name for creating and manipulating symbols. Control primitives include cond, case and if, as well as function call, including recursion. New functions are defined with defun and macros with defmacro. See programming for a list of the Common Lisp primitives supported by ACL2.
ACL2 supports five of Common Lisp's datatypes:
ACL2 is a very small subset of full Common Lisp. ACL2 does not include the Common Lisp Object System (CLOS), higher order functions, circular structures, and other aspects of Common Lisp that are non-applicative. Roughly speaking, a language is applicative if it follows the rules of function application. For example, f(x) must be equal to f(x), which means, among other things, that the value of f must not be affected by "global variables" and the object x must not change over time. " -- [80]
https://github.com/eliben/bobscheme
" A tiny, embeddable language implemented in ANSI C
(= reverse (fn (lst)
  (let res nil)
  (while lst
    (= res (cons (car lst) res))
    (= lst (cdr lst)))
  res))
(= animals '("cat" "dog" "fox"))
(print (reverse animals)) ; => ("fox" "dog" "cat")
Overview
Supports numbers, symbols, strings, pairs, lambdas, macros
Lexically scoped variables, closures
Small memory usage within a fixed-sized memory region — no mallocs
Simple mark and sweep garbage collector
Easy to use C API
Portable ANSI C — works on 32 and 64bit
Concise — less than 800 sloc"
"tinyScheme, which is a BSD licensed, very small, very fast implementation of Scheme that can be compiled down into about a 20K executable if you know what you’re doing."
https://en.wikipedia.org/wiki/TinyScheme
http://tinyscheme.sourceforge.net/home.html
https://github.com/dchest/tinyscheme/blob/master/Manual.txt
https://en.wikibooks.org/wiki/Scheme_Programming/TinyScheme
https://www.reddit.com/r/scheme/comments/y8ni6/tinyscheme_vs_chibi_vs/
" TinyScheme? is used by the GNU Image Manipulation Program (GIMP) starting with version 2.4, released in 2007. GIMP previously used SIOD.[1]
TinyScheme? was used as the core of Direct Revenue's adware, making it the world's most widely distributed Scheme runtime.[2] " [81]
"...iOS and macOS sandboxing rules are written in a dialect of Scheme... It's TinyScheme?, apparently." -- [82] and [83]
https://github.com/zpl-c/tinyscheme/blob/master/source/scheme.c
" If you want a small, simple Scheme, Tinyscheme is very likely the best available option." -- kragensitaker, 2008
https://github.com/jart/sectorlisp https://justine.lol/sectorlisp/ https://justine.lol/sectorlisp2/
Links:
https://dl.acm.org/doi/abs/10.1145/3486606.3486783
https://ryansuchocki.github.io/microscheme/
"A functional programming language for the Arduino"
http://www.ulisp.com/ http://www.ulisp.com/show?3L
" Lisp for microcontrollers Lisp for Arduino, Adafruit M0/M4, Micro:bit, ESP8266/32, RISC-V, and Teensy 4.x boards. "
" Specification
The language is generally a subset of Common Lisp, and uLisp programs should also run under Common Lisp. But note that there is one namespace for functions and variables; in other words, you cannot use the same name for a function and a variable.
The 8/16 bit platforms provide lists, symbols, integers, characters, strings, and streams.
In addition the 32-bit platforms provide floating-point numbers and arrays, including bit-arrays.
An integer is a sequence of digits, optionally prefixed with "+" or "-". On the 8/16-bit platforms integers can be between -32768 and 32767. On the 32-bit platforms integers can be between 2147483647 and -2147483648. You can enter integers in hexadecimal, octal, or binary with the notations #x2A, #o52, or #b101010, all of which represent 42.
On platforms with more than 2 Kbytes of RAM arbitrary user-defined symbol names are supported. Any sequence that isn't an integer can be used as a symbol; so, for example, 12a is a valid symbol. On platforms with only 2 Kbytes symbol names can have up to three characters consisting of a-z, 0-9, or $, *, or -.
uLisp provides tail-call optimization, so applications written using recursive functions can be as efficient as using iteration.
Strings can consist of an arbitrary sequence of ASCII characters. Strings can be unlimited length, and are automatically garbage-collected.
uLisp includes a mark and sweep garbage collector. Garbage collection takes under 1 msec on an Arduino Uno or under 3 msec on an Arduino Mega 2560 (see Performance).
uLisp also includes a simple program editor (see Using the program editor), a trace facility, and a pretty printer (see Debugging in uLisp). "
http://www.stripedgazelle.org/joey/dream.html https://web.archive.org/web/20110824051900/http://www.stripedgazelle.org/joey/dream.html
"an R4RS compatible Scheme written in x86 assembly. There are versions for Linux, Windows, and bootable floppy." [84]
http://common-lisp.net/project/movitz/ "Movitz is a small (almost) ANSI complete Common Lisp which runs on bare x86. The floppy image works well, but I had some issues with screen width. Asteroids and emacs are available on archive.org, but I haven't tested these yet." [85]
https://web.archive.org/web/20110415141820/http://hedgehog.oliotalo.fi/
https://www2.cs.sfu.ca/~cameron/smlisp/
http://sam.zoy.org/elk/ https://en.wikipedia.org/wiki/Extension_Language_Kit http://www-rn.informatik.uni-bremen.de/software/elk/
"embeddable...Scheme...for applications written in C or C++." [86]
https://common-lisp.net/project/ecl/
https://en.wikipedia.org/wiki/Lispkit_Lisp
https://carld.github.io/2017/06/20/lisp-in-less-than-200-lines-of-c.html
https://www.piumarta.com/software/maru/
"Maru is basically a lisp where some of the core functions like eval and apply are extensible from user code. There's basically a global map of types to evaluators and applicators, with some functions for you to register your own types and get the evaluation behavior you want." [87]
https://www.piumarta.com/software/lysp/
https://github.com/kovrik/scheme-in-kotlin
Lisp with full lexical closures in ~100 lines of Python
http://www.flownet.com/ron/lisp/l.py
Ben Lynn's implementation in about 100 lines of Haskell:
https://crypto.stanford.edu/~blynn/lambda/lisp.html
https://github.com/agrafix/micro-lisp
http://lambdaway.free.fr/workshop/?view=lambdacode
https://github.com/naver/lispe
https://github.com/naver/lispe/wiki/5.-Description-of-Functions,-Operators-and-Libraries
"a version of Lisp that is ultra-minimal but contains all the basic instructions of the language"
" An implementation of a full fledged Lisp interpreter with Data Structure, Pattern Programming and High level Functions with Lazy Evaluation à la Haskell. "
https://github.com/naver/lispe/wiki/6.16-Why-Lisp
https://github.com/coalton-lang/coalton
"a typed, ML-inspired Lisp built on top of Common Lisp by Robert Smith" -- [88]
https://lobste.rs/s/faafm5/bass_is_low_fidelity_lisp_dialect_for
http://www.paulgraham.com/bel.html https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=1595850613&
Comparison of Bel macros with fexprs:
https://github.com/racket/racket/pull/4179 https://github.com/racket/zuo https://github.com/mflatt/zuo
https://github.com/carp-lang/Carp
"A statically typed lisp, without a GC, for real-time applications."
R7RS Scheme compiler (targetting C) and interpreter
"We provide modern features and a stable system capable of generating fast native binaries.
Cheney on the MTA is used by Cyclone’s runtime to implement full tail recursion, continuations, and generational garbage collection. In addition, the Cheney on the MTA concept has been extended to allow execution of multiple native threads. An on-the-fly garbage collector is used to manage the second-generation heap and perform major collections without “stopping the world”."
Internals:
https://github.com/makuto/cakelisp
https://github.com/LuxLang/lux
(not FOSS)
https://www.researchgate.net/profile/Traian-Serbanuta/publication/32964580_A_Rewrite_Framework_for_Language_Definitions_and_for_Generation_of_Efficient_Interpreters/links/53e8fa620cf2dc24b3c7e4ad/A-Rewrite-Framework-for-Language-Definitions-and-for-Generation-of-Efficient-Interpreters.pdf https://www.researchgate.net/publication/32964851_A_K_Definition_of_Scheme also https://core.ac.uk/download/pdf/4820639.pdf http://www.cs.ecu.edu/hillsma/publications/meredith-hills-rosu-2007-scheme.pdf An Equational Specification for the Scheme Language A Formal Rewriting Logic Semantic Definition of Scheme
http://gambitscheme.org/ https://github.com/gambit/gambit
"Gambit is an efficient implementation of the Scheme programming language." -- [89]
" Targets C, JavaScript?, Python, and more. ...First released in 1988, Gambit is the third-oldest Scheme implementation still in use. Gambit's compiler and runtime have continually served as a platform for university research on compiler techniques and concurrent systems, allowing them to be refined over the decades to tackle the complex goal of speed, portability and concurrency in a single system. The result is a practical language implementation that has not abandoned simplicity in the pursuit of performance. A simple design leads to a system that does not only look good in benchmarks but is also pleasant to use. ...Gambit is mostly written in itself. The interpreter and debugger can run on almost any platform ...Gambit is one of the fastest dynamic language implementations. ... Millions of lightweight threads on a server. Messaging with higher-order channels. Erlang-like concurrency primitives with Scheme syntax. " -- http://gambitscheme.org/
https://woodrush.github.io/blog/lambdalisp.html
"LambdaLisp? is a Lisp interpreter written as an untyped lambda calculus term. The input and output text is encoded into closed lambda terms using the Mogensen-Scott encoding, so the entire computation process solely consists of the beta-reduction of lambda calculus terms.
...
Key features are:
Signed 32-bit integers
Strings
Closures, lexical scopes, and persistent bindings with let
Object-oriented programming feature with class inheritance
Reader macros with set-macro-character
Access to the interpreter’s virtual heap memory with malloc, memread, and memwrite
Show the call stack trace when an error is invoked
Garbage collection during macro evaluation
Supported special forms and functions are:
defun, defmacro, lambda (&rest can be used)
quote, atom, car, cdr, cons, eq
+, -, *, /, mod, =, >, <, >=, <=, integerp
read (reads Lisp expressions), print, format (supports ~a and ~%), write-to-string, intern, stringp
let, let*, labels, setq, boundp
progn, loop, block, return, return-from, if, cond, error
list, append, reverse, length, position, mapcar
make-hash-table, gethash (setf can be used)
equal, and, or, not
eval, apply
set-macro-character, peek-char, read-char, ` , ,@ ' #\
carstr, cdrstr, str, string comparison with =, >, <, >=, <=, string concatenation with +
defun-local, defglobal, type, macro
malloc, memread, memwrite
new, defclass, defmethod, ., field assignment by setf
...
The prelude Lisp code is a LambdaLisp code that defines core functions such as defun and defmacro as macros. The prelude is hard-coded as a compressed string, which is decompressed by the string generator before being passed to the interpreter. The compression method is described later. The LambdaLisp core code written in LambdaCraft hard-codes only the essential and speed-critical functions. Other non-critical functions are defined through LambdaLisp code in the prelude. A lot of the features including defun and defmacro which seem to look like keywords are in fact defined as macros.
" -- https://woodrush.github.io/blog/lambdalisp.html
so where is the core defined? I'm guessing it's in https://github.com/woodrush/lambdalisp/blob/main/src/lambdacraft.cl . That has stuff like:
(defun-lazy 2 (f x) (f (f x)))
https://docs.racket-lang.org/htdp-langs/index.html
BSL (Beginning Student Language), BSL+, ISL (Intermediate Student Language), ISL+, ASL (Advanced Student Language)
https://htdp.org/2018-01-06/Book/i1-2.html
http://doc.gnu-darwin.org/beginning/index.htm
https://github.com/KinaKnowledge/juno-lang https://github.com/KinaKnowledge/juno-lang/blob/main/doc/tutorial.md https://kinadata.com/welcome.html
, Lisp Machine Lisp, EuLisp, and Kawa. -- http://replove.herokuapp.com/
http://axisofeval.blogspot.com/2010/08/no-more-minimal-early-lisps-pulleezz.html -- http://replove.herokuapp.com/
links:
"hygiene The principle that macros adopt the lexical context of their definition site (which is known in advance to the macro writer) rather than their calling site (which is not). This eliminates the risk of name collisions between identifiers introduced by the macro and identifiers at the calling site. (A problem also known as identifier capture.) One can override hygiene by using an unhygienic identifier. See also the hygiene explainer." [90]
on Lisp 'livecoding' in the REPL: [91]