notes-computer-programming-programmingLanguageDesign-prosAndCons-loper

http://www.loper-os.org/?p=1361

--

" I see this as a convincing argument for “silicon big government.” Move garbage collection, type checking, persistence of storage, and anything else which is unambiguously necessary in a modern computing system into hardware – the way graphics processing has been moved. Forget any hypothetical efficiency boost: I favor such a change for no other reason than the fact that cementing such basic mechanisms in silicon would force people to get it right. On the first try – the way real engineers are expected to. "

--

" Let’s examine the Scheme-79 chip: the only architecture I know of which was truly elegant inside and out. It eschewed the compromises of its better-known contemporary, the MIT Lisp Machine (and its later incarnations at LMI and Symbolics) – internally microcoded stack machines, whose foundational abstractions differed minimally from those found in today’s CPUs and VMs. The experimental S79 fetched and executed CONS cells directly – and was coupled to a continuously-operating hardware garbage collector. I will not describe the details of this timeless beauty here – the linked paper is eminently readable, and includes enough detail to replicate the project in its entirety. Anyone who truly wishes to understand what we have lost is highly encouraged to study the masterpiece. "

" Here is one noteworthy tidbit:

    “A more speculative approach for improving the performance of our interpreter is to optimize the use of the stack by exploiting the observation that the stack discipline has regularities which make many of the stack operations redundant. In the caller-saves convention (which is what the SCHEME-79 chip implements) the only reason why a register is pushed onto the stack is to protect its contents from being destroyed by the unpredictable uses of the register during the recursive evaluation of a subexpression. Therefore one source of redundant stack operations is that a register is saved even though the evaluation of the subexpression may not affect the contents of that register. If we could look ahead in time we could determine whether or not the register will retain its contents through the unknown evaluation. This is one standard kind of optimization done by compilers, but even a compiler cannot optimize all cases because the execution path of a program depends in general on the data being processed. However, instead of looking ahead, we can try to make the stack mechanism lazy in that it postpones pushing a register until its contents are about to be destroyed. The key idea is that each register has a state which indicates whether its contents are valuable. If such a valuable register is about to be assigned, it is at that moment pushed. In order to make this system work, each register which may be pushed has its own stack so that we can decouple the stack disciplines for each of the registers. Each register-stack combination can be thought of as having a state which encodes some of the history of previous operations. It is organized as a finite-state automaton which mediates between operation requests and the internal registers and stack. This automaton serves as an on-the-fly peephole optimizer, which recognizes certain patterns of operations within a small window in time and transforms them so as to reduce the actual number of stack operations performed.”
    “The SCHEME-79 Chip” (G. J. Sussman, J. Holloway, G. L. Steele Jr., A. Bell)

What we are looking at is a trivial (in retrospect) method for entirely relieving compilers of the burden of stack discipline: a necessary first step towards relieving programmers of the burden of compilers. A systems programmer or electrical engineer educated in the present Dark Age might ask why we ought to demand relief from CPUs which force machine code to “drive stick” in register allocation and stack discipline. After all, have we not correctly entrusted these tasks to optimizing compilers? Should we not continue even further in this direction? This is precisely the notion I wish to attack. Relegating the task of optimization to a compiler permanently confines us to the dreary and bug-ridden world of static languages – or at the very least, makes liberation from the latter nontrivial. So long as most optimization takes place at compile time, builders of dynamic environments will be forced to choose between hobbled performance and the Byzantine hack of JIT compilation.

"
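The lazy register-save scheme quoted above can be modeled as a tiny state machine per register. Below is a hypothetical Python reconstruction of the idea (the class and field names are invented, and this is nothing like the chip's actual microcode): each register carries a "valuable" flag and its own private stack, and a push is deferred until an assignment is actually about to destroy the contents.

```python
class LazyRegister:
    """One register with its own private stack, per the SCHEME-79 scheme:
    a requested push is deferred until the contents are about to be lost."""

    def __init__(self, value=None):
        self.value = value
        self.valuable = False   # a deferred (not yet physical) push is pending
        self.stack = []         # this register's own stack
        self.pushes = 0         # physical pushes actually performed

    def push(self):
        """Caller-saves request: just mark the contents valuable."""
        if self.valuable:
            # Two pending saves of the same register: the older one
            # can no longer be deferred.
            self.stack.append(self.value)
            self.pushes += 1
        self.valuable = True

    def assign(self, new_value):
        """Overwrite the register, spilling first if a save is pending."""
        if self.valuable:
            self.stack.append(self.value)
            self.pushes += 1
            self.valuable = False
        self.value = new_value

    def pop(self):
        """Undo the most recent push request."""
        if self.valuable:
            self.valuable = False   # push/pop with no assign: a free no-op
        else:
            self.value = self.stack.pop()
        return self.value

r = LazyRegister("env0")
r.push()                  # protect env0 around a subexpression...
r.pop()                   # ...which never touched the register
assert r.pushes == 0      # an eager discipline would have pushed once

r.push()
r.assign("env1")          # now the deferred save must really happen
assert r.pop() == "env0" and r.pushes == 1
```

The `valuable` flag plus the per-register stack is the small finite-state "window in time" the paper describes: a push immediately cancelled by a pop costs nothing, and only a genuinely destructive assignment forces a physical stack operation.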

" The instruction set of a properly designed computer must be isomorphic to a minimal, elegant high-level programming language. This will eliminate the need for a complex compiler, enabling true reflectivity and introspection at every level. Once every bit of code running on the machine is subject to runtime inspection and modification by the operator, the rotting refuse heaps of accidental complexity we are accustomed to dealing with in software development will melt away. Self-modification will take its rightful place as a mainstream programming technique, rather than being confined to malware and Turing Tarpit sideshows. "
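Mainstream dynamic languages offer only a faint, interpreter-level taste of the whole-machine reflectivity envisioned here, but even that taste illustrates the point. A minimal Python sketch (a hypothetical example, not from the post): the running code is itself an inspectable object, and behavior can be replaced without stopping anything.

```python
import types

def handler(msg):
    return "plain: " + msg

# Introspection: the live function exposes its own compiled code object...
assert isinstance(handler.__code__, types.CodeType)

# ...and modification: shadow it with a patched version, mid-run,
# with no rebuild and no restart.
old = handler
def handler(msg):
    return old(msg).upper()

print(handler("hi"))    # PLAIN: HI
```

In the environment the quote imagines, this kind of inspect-and-patch loop would apply uniformly to everything on the machine, not merely to objects inside one interpreter's sandbox.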

" How much effort (of highly ingenious people, at that) is wasted, simply because one cannot press a Halt switch and display/modify the source code of everything currently running (or otherwise present) on a machine? "

" A bedrock abstraction level is found in every man-made system. No recoverable failure, no matter how catastrophic, will ever demand intelligent intervention below it. Repair at those depths consists purely of physical replacement. [1] No car crash, however brutal, will ever produce piles of loose protons and neutrons. When a Unix binary crashes, it might leave behind a core dump but never a “logic gate dump” and certainly not a “transistor dump.” Logic gates and transistors lie well below the bedrock abstraction level of any ordinary computer. [2]

...

The computers we now use are descended from 1980s children’s toys. Their level of bedrock abstraction is an exceedingly low one.

...

Witness, for instance, the fabled un-debuggability of multi-threaded programs on today’s architectures. It stems purely from the fact that truly atomic operations can only exist at the bedrock level.

...

Yet, to a first approximation, solid state hardware will spend years doing exactly what it says on the box, until this hour comes. And when it has come, you can swap out the dead parts and be guaranteed correctness again.

...

I sometimes find myself wondering if the invention of the high-level compiler was a fundamental and grave (if perhaps inevitable) mistake, not unlike, say, leaded gasoline. No one seems to be talking about the down-side of the compiler as a technology – and there certainly is one. The development of clever compilers has allowed machine architectures to remain braindead. In fact, every generation of improvement in compiler technology has resulted in increasingly more braindead architectures, with bedrock abstraction levels ever less suited to human habitation.

...

I posit that a truly comprehensible programming environment – one forever and by design devoid of dark corners and mysterious, voodoo-encouraging subtle malfunctions – must obey this rule: the programmer is expected to inhabit the bedrock abstraction level. And thus, the latter must be habitable. "
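The atomicity point above can be made concrete with a deterministic simulation (an assumed example, not from the post): a high-level `counter += 1` is really load/add/store at the bedrock level, and an unlucky interleaving of those sub-steps between two threads silently loses an update. Because the programmer's level and the atomic level disagree, the bug is invisible in the source.

```python
def increment_steps(shared, key):
    """Generator yielding at each bedrock-level step of 'shared[key] += 1'."""
    tmp = shared[key]          # load
    yield
    tmp = tmp + 1              # add
    yield
    shared[key] = tmp          # store
    yield

shared = {"n": 0}
t1 = increment_steps(shared, "n")
t2 = increment_steps(shared, "n")

# Interleave the two "threads" step by step, as a preemptive scheduler might.
for step in (t1, t2, t1, t2, t1, t2):
    next(step, None)

print(shared["n"])   # 1, not 2: one increment was silently lost
```

Both threads load 0, both compute 1, and both store 1; at the level the programmer inhabits, two increments happened, yet the machine disagrees.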

--- http://www.loper-os.org/?p=13

" The Architecture of Symbolic Computers (Peter M. Kogge) is quite possibly the most useful resource I have come across in my quest thus far. If you are interested in Lisp Machine revival, non-von Neumann computation, or the dark arts of the low-level implementation of functional programming systems, you will not be disappointed. "

http://portal.acm.org/citation.cfm?id=542141