notes-computer-jasper-jasperNotes7

https://blog.heroku.com/archives/2014/3/11/node-habits

--

probs with Django ORM:

https://speakerdeck.com/alex/why-i-hate-the-django-orm

peewee ORM:

http://peewee.readthedocs.org/en/latest/peewee/upgrading.html#upgrading

http://peewee.readthedocs.org/en/latest/

--

http://reinout.vanrees.org/weblog/2013/08/21/programmatical-all-range.html

--

" Prehistorical Python: patterns past their prime - Lennart Regebro¶

Tags: django, djangocon Dicts

This works now:

>>> from collections import defaultdict
>>> data = defaultdict(list)
>>> data['key'].append(42)

It was added in Python 2.5. Previously you'd do a manual check whether the key exists and create it if it's missing.

Sets

Sets are very useful. Sets contain unique values. Lookups are fast. Before you’d use a dictionary:

>>> d = {}
>>> for each in list_of_things:
...     d[each] = None
>>> list_of_things = d.keys()

Now you’d use:

>>> list_of_things = set(list_of_things)

Sorting

You don’t need to turn a set into a list before sorting it. This works:

>>> something = set(...)
>>> nicely_sorted = sorted(something)

Previously you'd turn the set into a list and do some_list.sort().

Sorting with cmp

This one is old:

>>> def compare(x, y):
...     return cmp(x.something, y.something)
>>> sorted(xxxx, cmp=compare)

The new way is to use a key function. That gets you one call per item; a comparison function takes two items, so you get a whole lot of calls. Here's the new way:

>>> def get_key(x):
...     return x.something
>>> sorted(xxxx, key=get_key)
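(side note: Python 3 removes both the cmp argument to sorted() and the cmp() builtin, so old-style comparison functions have to be wrapped; a minimal sketch using the stdlib adapter:)

>>> from functools import cmp_to_key
>>> def compare(x, y):
...     return (x > y) - (x < y)  # old-style: negative/zero/positive
>>> sorted([3, 1, 2], key=cmp_to_key(compare))
[1, 2, 3]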

Conditional expressions

This one is very common!

This old one is buggy and hard to debug if blank_choice itself evaluates to false (e.g. None or an empty value):

>>> first_choice = include_blank and blank_choice or []

There’s a new syntax for conditional expressions:

>>> first_choice = blank_choice if include_blank else []
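(side note, my own check of the claim above: the old and/or idiom silently drops a falsy blank_choice, while the new syntax does not:)

>>> include_blank = True
>>> blank_choice = ''  # a falsy value we actually want
>>> include_blank and blank_choice or []
[]
>>> blank_choice if include_blank else []
''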

Constants and loops

Put constant calculations outside of the loop:

>>> const = 5 * a_var
>>> result = 0
>>> for each in some_iterable:
...     result += each * const

Someone suggested this was an outdated pattern: you can put the calculation inside the loop and Python will detect that and work just as fast. He tried it out and it turns out to depend a lot on the kind of calculation, so just stick with the above example.

String concatenation

Which of these is faster:

>>> ''.join(['some', 'string'])
>>> 'some' + 'string'

It turns out that the first one, which most of us use because it is apparently faster, is actually slower! So just use +.

Where does that join come from then? Here. This is slow:

>>> result = ''
>>> for text in make_lots_of_tests():
...     result += text

And this is fast:

>>> result = ''.join(make_lots_of_tests())

The reason is that in the first example, the result text is copied in memory over and over again.

So: use .join() only for joining lists. This also means that you effectively do what looks good. Nobody will concatenate lots of separate strings over several lines in their source code. You’d just use a list there. For just a few strings, just concatenate them. " -- http://reinout.vanrees.org/weblog/2013/05/17/prehistorical-python.html
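quick check of the join-vs-+ claims above (my own sketch, not from the talk; exact numbers vary by interpreter version, and CPython can sometimes optimize += in place, but the large gap for many strings shows up reliably):

    import timeit

    # two strings: plain + avoids building a list and doing a method call
    print(timeit.timeit("'some' + 'string'"))
    print(timeit.timeit("''.join(['some', 'string'])"))

    # many strings: += copies the accumulated result over and over (quadratic),
    # while join sizes the final string once and copies each piece once (linear)
    many = ['text%d' % i for i in range(10000)]

    def concat(parts):
        result = ''
        for text in parts:
            result += text
        return result

    print(timeit.timeit(lambda: concat(many), number=100))
    print(timeit.timeit(lambda: ''.join(many), number=100))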

---

" Introduce jsonb, a structured format for storing json.

The new format accepts exactly the same data as the json type. However, it is stored in a format that does not require reparsing the original text in order to process it, making it much more suitable for indexing and other operations. Insignificant whitespace is discarded, and the order of object keys is not preserved. Neither are duplicate object keys kept - the later value for a given key is the only one stored.

The new type has all the functions and operators that the json type has, with the exception of the json generation functions (to_json, json_agg etc.) and with identical semantics. In addition, there are operator classes for hash and btree indexing, and two classes for GIN indexing, that have no equivalent in the json type. "
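(aside: jsonb's last-duplicate-key-wins rule is the same thing Python's json module does when parsing into a dict, which is a handy way to remember the semantics; my own illustration:)

    import json

    # the later value for a duplicated key is the only one kept, as in jsonb
    print(json.loads('{"a": 1, "a": 2}'))   # {'a': 2}
    # and like jsonb, the parsed form no longer carries the original text or
    # its insignificant whitespace (Python dicts do keep key order, though)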

---

http://stackoverflow.com/questions/6430448/why-doesnt-gcc-optimize-aaaaaa-to-aaaaaa

--

http://spin.atomicobject.com/2014/03/25/c-single-member-structs/

summary: (a) the C sizeof operator behaves differently on arrays and pointers: applied to an actual array object (e.g. an array on the stack), it gives the size of the whole array in bytes, but applied to a pointer (e.g. a heap-allocated array, or an array parameter that has decayed to a pointer), it gives only the size of the pointer. This can be dealt with either by using a typedef or by wrapping the array in a one-member struct, both of which keep the array type intact so that sizeof gives the size of the array in either case. (b) Unwanted type coercion: say you have two types which are both ints but in different units, e.g. seconds and milliseconds. You may want the compiler to check that you aren't adding these together. A typedef won't do this; if you create 'second' and 'millisecond' types which are both ints and then add values of these types together, the compiler will look past the typedefs, see that they are really both ints, and allow the addition. But you can wrap them in one-member structs; now no operations are defined on them and you must define e.g. addition manually.

--

Determinism Is Not Enough: Making Parallel Programs Reliable with Stable Multithreading

Junfeng Yang, Heming Cui, Jingyue Wu, Yang Tang, and Gang Hu, "Determinism Is Not Enough: Making Parallel Programs Reliable with Stable Multithreading", Communications of the ACM, Vol. 57 No. 3, Pages 58-69.

    We believe what makes multithreading hard is rather quantitative: multithreaded programs have too many schedules. The number of schedules for each input is already enormous because the parallel threads may interleave in many ways, depending on such factors as hardware timing and operating system scheduling. Aggregated over all inputs, the number is even greater. Finding a few schedules that trigger concurrency errors out of all enormously many schedules (so developers can prevent them) is like finding needles in a haystack. Although Deterministic Multi-Threading reduces schedules for each input, it may map each input to a different schedule, so the total set of schedules for all inputs remains enormous.
    We attacked this root cause by asking: are all the enormously many schedules necessary? Our study reveals that many real-world programs can use a small set of schedules to efficiently process a wide range of inputs. Leveraging this insight, we envision a new approach we call stable multithreading (StableMT) that reuses each schedule on a wide range of inputs, mapping all inputs to a dramatically reduced set of schedules. By vastly shrinking the haystack, it makes the needles much easier to find. By mapping many inputs to the same schedule, it stabilizes program behaviors against small input perturbations. 

The link above is to a publicly available pre-print of the article that appeared in the most recent CACM. The CACM article is a summary of work by Junfeng Yang's research group. Additional papers related to this research can be found at http://www.cs.columbia.edu/~junfeng/ (posted by Allan McInnes at 2014-02-27 20:10)


---

Flask claims that it has lots of hooks and signals and is well designed and easy to read (http://flask.pocoo.org/docs/becomingbig/). If that's true, maybe look at how Flask would look if written in Jasper (mb even port it?).

---

good conference for this sort of thing:

http://lambda-the-ultimate.org/node/4916

" Future of Programming workshop

The call for submissions is out. There will be two opportunities this first year to participate: at Strangeloop in September and at SPLASH in October. The call:

    We are holding two events. First, in partnership with the Emerging Languages Camp, FPW×ELC will be at Strange Loop on September 17 in St. Louis MO. Second, at SPLASH in Portland OR around Oct. 19 (pending approval).
    We want to build a community of researchers and practitioners exploring the frontiers of programming. We are looking for new ideas that could radically improve the practice of programming. Ideas too embryonic for an academic paper yet developed enough to demonstrate a prototype. Show us your stuff!
    FPW×ELC will present live demonstrations before an audience. The SPLASH event will be an intense, private writer’s workshop. This process will be a chance to give and take both creative support and incisive criticism.
    Submissions will be 15 minute demo screencasts. You can select either or both of the events in your submission. The submission deadline is June 8 and notifications will be sent June 27. After the events participants will have until December 1 to revise their screencasts for archival publication on our website. The submission site is now open. For questions please see the FAQ or ask info@future-programming.org.
    Brought to you by Richard Gabriel, Alex Payne, and Jonathan Edwards. "

--

an April Fools' Day joke, but still a list of what people want from programming languages:

http://lambda-the-ultimate.org/node/4914

"..clean syntax, expressive type systems, good package managers and installers, free implementations.."

---

dog has some ideas similar to our high-level concept/types:

" to make it easier to identify people, he made people a basic data type that the language could recognize, just as other languages recognize strings of text or integers. ...

Then he created a simple syntax around these ideas that uses natural language (since the language deals with coordinating and communicating with people) and focused on a small set of very clear commands: ask, listen, notify, and compute. A sample line of code in a simple social news feed application reads, “LISTEN TO PEOPLE FROM mit VIA http FOR posts,” which would have the application monitor the Web for updates from a group of MIT-affiliated people. "

unfortunately there is no info on it available, arg: http://www.dog-lang.org/

--

mb just use {} as fn definer bc it looks like a block (or can we have non-fn blocks like in eg ruby with the weird scoping?)

mb just use $1 $2 etc for the magic vars (but we're not using $ so this wouldn't work, right?)

--

an example of how ruby/rails code looks clean:

https://github.com/jcs/lobsters/commit/025558f6ad98930aa7186ebfed764d9c8f6b762d :

" add support for /top and things like /top/3m and /top/2w

not linked to from anywhere yet

closes #95

commit 025558f6ad98930aa7186ebfed764d9c8f6b762d (1 parent: 3993e10), joshua stein (jcs) authored 5 days ago

Showing 2 changed files with 40 additions and 1 deletion.

app/controllers/home_controller.rb, the new top action:

    TOP_INTVS = { "d" => "Day", "w" => "Week", "m" => "Month", "y" => "Year" }

    def top
      @cur_url = "/top"
      length = { :dur => 1, :intv => "Week" }

      if m = params[:length].to_s.match(/\A(\d+)([#{TOP_INTVS.keys.join}])\z/)
        length[:dur] = m[1].to_i
        length[:intv] = TOP_INTVS[m[2]]

        @cur_url << "/#{params[:length]}"
      end

      @stories = find_stories({ :top => true, :length => length })

      if length[:dur] > 1
        @heading = @title = "Top Stories of the Past #{length[:dur]} " <<
          length[:intv] << "s"
      else
        @heading = @title = "Top Stories of the Past " << length[:intv]
      end

      render :action => "index"
    end

and in _find_stories, the new time filter and ordering:

      elsif how[:top] && how[:length]
        stories = stories.where("created_at >= (NOW() - INTERVAL " <<
          "#{how[:length][:dur]} #{how[:length][:intv].upcase})")
      end

      order = "hotness"
      if how[:newest]
        order = "stories.created_at DESC"
      elsif how[:top]
        order = "(CAST(upvotes AS integer) - CAST(downvotes AS integer)) DESC"
      end

      stories = stories.includes(
        ...
      ).offset(
        (how[:page] - 1) * STORIES_PER_PAGE
      ).order(
        order
      ).to_a

config/routes.rb:

    get "/hidden" => "home#hidden"
    get "/hidden/page/:page" => "home#hidden"

    get "/top" => "home#top"
    get "/top/page/:page" => "home#top"
    get "/top/:length" => "home#top"
    get "/top/:length/page/:page" => "home#top"

    get "/threads" => "comments#threads"
    get "/threads/:user" => "comments#threads"

"

---

pretty neat how in Python you can do this:

    a = locals()
    a['b'] = 3
    b == 3
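(caveat, my own check: this works at module scope because locals() there is the very same dict as globals(); inside a CPython function, locals() is only a snapshot, so writes to it don't create real variables:)

    a = locals()        # module scope: same dict as globals()
    a['b'] = 3
    print(b)            # 3

    def f():
        ns = locals()
        ns['c'] = 3     # mutates a snapshot, not the real function scope
        return c        # NameError: name 'c' is not defined

    try:
        f()
    except NameError as e:
        print(e)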

--

http://brikis98.blogspot.com/2014/04/six-programming-paradigms-that-will.html

https://news.ycombinator.com/item?id=7565153

--

great stuff about bash:

http://robertmuth.blogspot.com/2012/08/better-bash-scripting-in-15-minutes.html

--

would be nice to have a useful octave-style 'who' in addition to a python-style locals() (the difference being that locals() quickly gets crowded with a bunch of imported functions and module namespaces and stuff from various 'from X import *'s)
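a rough sketch of what such a 'who' could look like as a plain helper (the name and the filtering rules are my own guesses at what "useful" means here):

    import types

    def who(namespace):
        # list the "plain data" names in a namespace dict such as globals(),
        # hiding modules, functions, classes and underscore names (the
        # stuff that 'from X import *' tends to drag in)
        hidden = (types.ModuleType, types.FunctionType,
                  types.BuiltinFunctionType, type)
        return sorted(name for name, value in namespace.items()
                      if not name.startswith('_')
                      and not isinstance(value, hidden))

    # >>> from math import *
    # >>> x = 3
    # >>> who(globals())
    # ['e', 'inf', 'nan', 'pi', 'tau', 'x']   # math's float constants still
    #                                         # show up; it's only a heuristic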

--

flohofwoe 1 day ago

link

Another vote for higher-level meta-build-systems like cmake, premake or scons (I'm using cmake because it has very simple cross-compilation support). My personal road to build-system nirvana looked like this, I'm sure this is fairly common:

Of course now I'm sorta locked-in to cmake, and setting up a cmake-based build-system can be complex and challenging as well, but the result is an easy to maintain cross-platform build system which also supports IDEs.

In general I'm having a lot less problems compiling cmake-based projects on my OSX and Windows machines than autoconf+Makefile-based projects.

[edit: formatting]

reply

greggman 1 day ago

link

I agree with this. As a cross platform (ie, Windows + OSX + Linux) person who enjoys Visual Studio (XCode less so) I need more than make.

My own experience is with gyp and ninja which is used by the Chromium team (http://martine.github.io/ninja/) which they use to build Windows, OSX, Linux, Android (and maybe iOS?)

Of course for personal projects I'll probably never notice the speed difference but for bigger ones Ninja is FAST.

reply

mpyne 1 day ago

link

CMake outputs to Ninja as well. It's the only way I know how to use Ninja in fact.

reply

Too 2 days ago

link

Might go a bit off topic but I have to bring this up since 9 out of 10 make tutorials on the internet make the same horrific mistake as you just did; 11 out of 10 code bases out in the wild do as well.

In your make file example the .o files are just depending on the .cpp files, not the header files they include, the header files those included header files include and the files they include etc etc. This means nothing will be recompiled/relinked if a constant in a header file changes for example! Changed function signatures will give you cryptic linker errors with the standard solution "just try make clean first".

To solve this you can either manually update the make file every time any file changes the files it includes, which almost defeats the purpose of having an automatic build system. Or you can use automatic dependency generation by invoking your compiler with a special flag (-MMD for GCC), and suddenly make isn't as simple anymore as you laid it out to be. In conclusion your build tool must be aware of ALL inclusion rules as your compiler(preprocessor) has, or be given the information somehow. Maybe it's better to just use something designed for your particular toolchain that can come bundled with this knowledge?

reply

humanrebar 2 days ago

link

Right. Make is mostly a kludge around the nonexistent module system in C and C++.

It's so bad (specifically due to the way file preprocessing works), that you need to have large parts of a compiler to accurately determine what the dependencies of a source file are.

This is why a decent module system should be the top priority for C++17, though it doesn't look likely so far.

reply

conradev 2 days ago

link

Have you seen what Clang is doing[1]?

[1] http://clang.llvm.org/docs/Modules.html

reply

ArkyBeagle 1 day ago

link

This is eminently possible with make-plus-GCC - add the line:

.depend :

        gcc -M *.c > .depend

then at the bottom:

source .depend

reply

dmytrish 1 day ago

link

Thank you for that useful option.

Just a nitpick: did you mean `include .depend`?

reply

ArkyBeagle 1 day ago

link

Great nit to pick. I was typing, not testing. I believe you are 100% correct.

You may need to do a little finagling to bootstrap .depend when adding it to an existing make file: echo "garp" > .depend; make .depend may suffice.

reply

erikb 2 days ago

link

Is that solved with autotools? I'm not an expert, but I wonder why people talk about pure Make where a lot of bigger projects actually use autotools.

reply

endgame 2 days ago

link

Yes, it is. automake will generate makefiles that track dependencies as a side-effect of compilation ( http://www.gnu.org/software/automake/manual/automake.html#De... ). What this means is that whenever a source file is compiled it will update the dependencies for the .o file. It has to be done then because different platforms could have different header dependencies (if you're having fun with #define).

reply

joeld42 2 days ago

link

I think everyone goes through a phase where they try to find the perfect build tool, and then at least entertain the idea of writing one themselves.

Eventually, you grow out of it. There's a lot of build tools, each are better at some things than others. It's not that much grunt work to convert things from one to another (even very large projects). If your build tool is working for you, leave it alone. If it's getting in your way or slowing things down, try another one. Move on.

reply

shoo 2 days ago

link

I've been thinking a lot about build systems lately. I enjoy the discussion that this post has provoked. The post itself is weaker than it could have been, in that it does not stick to a single example when comparing build tools, and does not pin down any criteria for distinguishing between build tools.

If you are interested in a comparison of a few interesting build tools, please check out Neil Mitchell's "build system shootout" : https://github.com/ndmitchell/build-shootout . Neil is the author of the `Shake` build system. The shootout compares `Make`, `Ninja`, `Shake`, `tup` and `fabricate`.

Another possibly interesting build tool is `buck`, although it is primarily aimed at java / android development. See http://facebook.github.io/buck/ . There's a little discussion about `gerrit`'s move to `buck` here: http://www.infoq.com/news/2013/10/gerrit-buck .

Here are some questions I'd ask of a build system:

Many of these criteria are completely overkill for trivial build tasks, where you don't really need anything fancy.

reply

BoppreH 2 days ago

link

I think one reason is because Make is built with Shell, which is always one step (and one letter) away from hell.

For example:

    clean:
         rm -rf *o hello

Did you really mean to erase all files and directories that end in "o"? Let's say it's just a typo and fix it: "*.o".

Now, are you sure it'll handle files with spaces in the name? What about dashes, brackets and asterisks? Accents? What if a directory ends in .o? Hidden files?

This specific case may support all of the above. But if it doesn't, what will happen? How long until you notice, and how long still to diagnose the problem?

Just like I prefer static strong typing when developing non-trivial projects, the build system should be more structured. I agree endless reinventing is tiring, but it may have some credit in this case.

reply

idlewan 2 days ago

link

I'd expect all developers using make to know about this and never have this problem thanks to one simple thing: sticking with sensible names (no spaces, no brackets, no stars and other special characters in the name - hello underscores!).

It's an easy rule.

    Just like I prefer static strong typing...

You probably don't use any special chars or spaces for identifiers in whatever the language you're programming in. This is just applying a similar rule to the files of your project.

reply

BoppreH 2 days ago

link

For source code files I agree completely; but the build system will encounter other types of files that aren't so strict.

Maybe you downloaded something and it came with a bracket because that was in the page title. Or you copied a duplicated file and your system helpfully appended " (2)" at the end. Or there was an excel file updated by someone not so technical and this person didn't know they have to strip accents from the words in their native language (possibly losing meaning). Or someone saved their "Untitled Document.txt". Or you needed to include a date in the directory name. Or you are just human and didn't mean to break the build by pressing the biggest button on your keyboard when saving a file.

And remember "break the build" here is not "a red light flashes and you get an email". It means you get unknown behavior throughout the process, including security features and file removal.

Strict rules for source code file names are good because names usually bleed into the language itself. Python file names become identifiers when you import them. Identifiers in turn are strict because parsing is strict, and there are many good reasons for strict parsing in general purpose languages.

Lacking accent support in file names, as some very popular software do, is terrible. Lacking support for spaces is just atrocious.

I love shell, I use it daily for one-off tasks, but I don't think it's a good fit to manage the build system of a project.

reply

gyepi 2 days ago

link

1. Since make has builtin suffix rules, the Makefile could be simplified to:

    CXX=g++
    hello: main.o factorial.o hello.o
    clean:
        rm -rf *o hello

2. Shameless plug: he didn't mention redo [1], which is simpler than make and more reliable. The comparable redo scripts to the Makefile would be:

    cat <<EOF > @all.do
    redo hello
    EOF
    cat <<EOF > hello.do
    o='main.o factorial.o hello.o'
    redo-ifchange $o
    g++ $o -o $3
    EOF
    cat <<EOF > default.o.do
    redo-ifchange $2.cpp
    g++ -c $1 -o $3
    EOF
    cat <<EOF > @clean.do
    rm -rf *o hello
    EOF

[Edit: Note that these are heredoc examples showing how to create the do scripts.]

These are just shell scripts and can be extended as much as necessary. For instance, one can create a dependency on the compiler flags with these changes:

    cat <<EOF | install -m 0755 /dev/stdin cc
    #!/bin/sh
    g++ -c "\$@"
    EOF
    sed -i 's/^\(redo-ifchange.\+\)/\1 cc/' *.do
    sed -i 's}g++ -c}./cc}' *.do

sed calls could be combined; separated here for readability.

[1] https://github.com/gyepisam/redux

reply

JoshTriplett 2 days ago

link

CXX=g++ isn't necessary either; make already knows about $(CXX) and how to link C++ programs. Also, I think you wanted .o, not o.

And compared to that Makefile, the redo scripts you list don't seem simpler at all. I've seen reasonably compelling arguments for redo, but that wasn't one.

reply

gyepi 2 days ago

link

> CXX=g++ isn't necessary either; make already knows about $(CXX) and how to link C++ programs.

You're right, of course.

> Also, I think you wanted .o, not o.

I would, yes, but I copied the Makefile ;)

Should have been clearer; I meant that redo is simpler (and more reliable) than make.

For simple projects, redo scripts are a bit longer. However, as the projects grow, the redo scripts reach an asymptote whereas Makefiles don't. The only way to reduce the growth in make is to add functions and implicit rules which get ugly real fast.

reply

gcv 2 days ago

link

redo is pretty cool, but I ran into trouble with apenwarr's implementation (https://github.com/apenwarr/redo, see https://groups.google.com/d/msg/redo-list/GL5z8eEqT90/tk_vLZ...) with OS X Mavericks. I have no experience with the alternative implementation at https://github.com/gyepisam/redux, since it came out after I reimplemented the build system in question with CMake.

In general, I found CMake quite useable for my needs, and quite clean. It also required less build system code than redo. CMake fits quite nicely into a (C or C++) project which consists of many binaries and libraries which can depend on each other.

reply

pekk 2 days ago

link

redo might be simpler and more reliable, but shell isn't. And redo is encouraging even more work to be done in shell. Additionally, the redo version is more verbose and harder to read. While fancier tasks will make make's version look horrible relatively quickly, they won't make redo's version look any better.

reply

gyepi 2 days ago

link

> redo might be simpler and more reliable, but shell isn't.

Not quite sure what you mean here. The scripts don't do anything complicated and redo catches errors that could occur.

As for readability, etc, I suppose it's relative. Simple makefiles do read very nicely. Unfortunately, they aren't always simple and hairy makefiles are just horrible to write, read and maintain. I've had no such problems with do scripts.

reply

malkia 2 days ago

link

To this day I still don't understand redo (I'm just staring at it, and don't get anything) - haven't really read the internals.

With make it was easier for me to grasp the idea (or maybe I was simply 20 years younger then).

reply

pmahoney 2 days ago

link

I think the big difference between redo and make is that make requires knowledge of dependencies up front, and this is sometimes tricky to get right.

"as you can see in default.o.do, you can declare a dependency after building the program. In C, you get your best dependency information by trying to actually build, since that's how you find out which headers you need. redo is based on the following simple insight: you don't actually care what the dependencies are before you build the target; if the target doesn't exist, you obviously need to build it. Then, the build script itself can provide the dependency information however it wants; unlike in make, you don't need a special dependency syntax at all. You can even declare some of your dependencies after building, which makes C-style autodependencies much simpler."

https://github.com/apenwarr/redo

reply

gyepi 2 days ago

link

It's actually quite simple. You write a short shell script to produce the output you need and redo handles the dependencies.

For example, the shell script named "banana.x.do" is expected to produce the content for the file named "banana.x".

When you say

    redo banana.x

redo invokes banana.x.do with the command:

    sh -x banana.x.do banana.x banana XXX > ZZZ

so banana.x.do is invoked with three arguments and its output is redirected to a file.

   $1 denotes the target file
   $2 denotes the target file without its extension
   $3 is a temp file: XXX, in this case.

banana.x.do is expected to either produce output in $3 or write to stdout, but not both. If there are no failures redo will choose the correct one, rename the output to banana.x and update the dependency database.

If banana.x depends on grape.y, you add the line

    redo-ifchange grape.y

to the banana.x.do, creating a dependency. redo will rebuild grape.y (recursively) when necessary.

The only other commands I haven't mentioned are init and redo-ifcreate, which are obvious and rarely used, respectively.

That's it.

reply

GrinningFool 2 days ago

link

Sorry, but that doesn't appear simpler to me...

reply

rafekett 2 days ago

link

the last 10 years in build tools has felt like 1 step forward, two steps back. i like being able to write tasks in any language other than Makefile. however, it seems like many of the new popular options (cake, grunt, etc.) don't do what, to me, is Make's real purpose: resolve dependencies and only rebuild what's necessary. new task runners have either eliminated or pigeonholed the (typically one-to-one in makeland) correspondence between tasks and files, meaning the build system can't be intelligent about what tasks to run and which to not.

computers are fast enough that this doesn't often bother me anymore, but i've run across some huge Rakefiles that could benefit from a rewrite in Make.

reply

chrismonsanto 2 days ago

link

> however, it seems like many of the new popular options (cake, grunt, etc.) don't do what, to me, is Make's real purpose: resolve dependencies and only rebuild what's necessary.

You might like tup[1]. Its killer feature is that it automatically determines file-based dependencies by tracking reads and writes (using a FUSE filesystem). It has an extreme emphasis on correct, repeatable builds, and is very fast. Other stuff:

I've tried every build system out there. For Unix-y file-based build tasks, tup is, by far, the best. I don't know why it isn't more well known.

[1]: http://gittup.org/tup/index.html

reply

Rusky 2 days ago

link

+1 for the link. I hadn't seen tup before, and I really like how it feels like make in its simplicity, but is more explicit about the graph inputs and outputs, cares more about the output (e.g. deleting old files), and watches the file system for changes.

reply

rkrzr 2 days ago

link

I was already sold on tup after reading the first paragraph comparing it to make[1]:

"This page compares make to tup. This page is a little biased because tup is so fast. How fast? This one time a beam of light was flying through the vacuum of space at the speed of light and then tup went by and was like "Yo beam of light, you need a lift?" cuz tup was going so fast it thought the beam of light had a flat tire and was stuck. True story. Anyway, feel free to run your own comparisons if you don't believe me and my (true) story."

[1]: http://gittup.org/tup/make_vs_tup.html

reply

andrewflnr 2 days ago

link

My favorite is the "Tup vs Mordor" benchmark.

reply

malkia 2 days ago

link

I think MSBuild does something like this - there is a filetracker that tracks what files were read/written while running a tool, and writes that information in a file. I think you can even install your own file-notification-changes .dll to track changes your way (maybe file system that is not supported, or something else).

Similar to what lsof, procmon (windows) do.

reply

Touche 2 days ago

link

Seconded on tup's greatness. Go through the example to see what all it's doing:

http://gittup.org/tup/ex_a_first_tupfile.html

reply

mhw 2 days ago

link

I agree completely, and I think the blame rests with Ant and Java. Java's dependency management was painful enough to deal with in 'make' that Ant was built to support building Java projects. But in doing so the authors threw away the explicit file dependencies that made 'make' so powerful in the first place. Instead it got people to think in terms of a graph of 'tasks', each of which could either figure out its own dependency management, or more commonly ignore them completely. Most tools that followed seem to have gone down the 'graph of tasks' avenue, with 'graph of file dependencies' as an additional mechanism if you're lucky.

The huge Rakefiles you've seen could possibly have simply benefited from a rewrite in Rake. Rake has 'file' tasks which implement the file dependencies of 'make' but for some reason most users of Rake seem to ignore them completely.

reply

ChuckMcM 2 days ago

link

It can be worse than that, one build system I looked at built everything every time. Why? Because "Computers are fast enough that trying to figure out exactly what needs to be rebuilt is an anachronism, this can rebuild everything in the time it took that crufty old system to figure out what it actually had to build."

I've given up trying to educate folks, I just make a note to check in with them, 6 months to a year later, to see if they are still building everything.

reply

retrogradeorbit 1 day ago

link

Yes. And then combine that dependency tree resolution with make -j. Simple and powerful.

reply

msluyter 2 days ago

link

Nothing against make, but I've found that it feels really nice when the majority of your toolset uses the same language. This is what I liked about Rails. Rails is ruby. Bundler is ruby. Rake is ruby. It's all ruby, which allows for a certain synergy, streamlined feel, and less cognitive overhead. I don't blame the js folks for attempting something similar.

reply

Cyranix 2 days ago

link

Agreed. Mixing languages is fine when necessary, but a single language is usually preferable to me. My dev team has been using Grunt in projects thus far and has been pleased with Gulp in small experiments.

reply

luckydude 1 day ago

link

I'm 52 years old. I've had this discussion with dmr, srk, maybe with wnj.

All I know is for years, decades, I carried around the source to some simplistic make. I hate GNU make, I hate some of the unix makes. I loved the simple make.

The beauty of make is it just spelled out what you needed to do. Every darn time make tried to get clever it just made it worse. It seemed like it would be better and then it was not.

Make is the ultimate less is more. Just use it and be happy.

reply

ThePhysicist 2 days ago

link

For me, the only justification for using a language-specific build tool (e.g. grunt, rake, paver, ...) is when you actually want to exchange data with a library / program written in that language. On the other hand, you could probably accomplish the same effect using environment variables, with the upside of having a cleaner interface.

For those that are curious which build tools exist for Python, here's an (incomplete) list:

reply

SoftwareMaven 2 days ago

link

You forgot buildout[1], which is probably more than a build system, perhaps putting a toe into the configuration management world.

1. http://www.buildout.org/

Documentation can be challenging to find, and it isn't the most actively developed project in the world, but what it does, it does pretty well (including supporting more than python dependencies).

reply

ThePhysicist 2 days ago

link

Thanks for posting the link! I hesitated including configuration management tools since the use case is not the same. There's a lot of interesting stuff going on there though: With Saltstack and Ansible we have two serious "chef" contenders for Python now.

reply

asb 2 days ago

link

I've recently been playing with ninja, which does a good job of not being 'just another make' http://martine.github.io/ninja/. To quote their website, "Where other build systems are high-level languages Ninja aims to be an assembler.". It's used as a backend for GYP (Google Chromium) and is supported by CMake as well. I've had good success generating the files manually using something like ninja_syntax.py: https://github.com/martine/ninja/blob/master/misc/ninja_synt....

I also note that google are working on a successor to GYP, GN, which targets Ninja: http://code.google.com/p/chromium/wiki/gn.

reply

evmar 2 days ago

link

Thanks for the plug! In line with the original post, I'll add that the Ninja manual has a section where we try to convince you to not use Ninja and instead use a more common build system: http://martine.github.io/ninja/manual.html#_using_ninja_for_...

reply

sheetjs 2 days ago

link

One big advantage of vanilla Make is the community. There are some very nice tools that work well with make (such as https://github.com/mbostock/smash).

reply

ztratar 2 days ago

link

I love documentation that has humor, as long as it doesn't get in the way.

What's special about the Make community as opposed to the Grunt or Gulp communities?

reply

pekk 1 day ago

link

For that matter, what's special about the Grunt or Gulp communities?

reply

Xorlev 2 days ago

link

This post rather misses that while Make is simple, making Make do all the things we're used to (e.g. Java dependency management) is not as simple.

I'd like to think people have decided that it's easier to replicate the task part of Makefiles onto their environment as the simpler alternative to making dependency management and various other language-specific tasks available to make.

reply

drawkbox 2 days ago

link

At least we are moving the direction of Grunt/Gulp rather than a maven sort of direction. Many lives lost to maven, somewhat of a Vietnam of build tools. You might think you are a Java developer with it but truly you are a maven servant.

reply

Zelphyr 2 days ago

link

Never underestimate a young developer's need to reinvent reinventing the wheel.

reply

daemin 1 day ago

link

I think the main problem with these articles is that the examples given are exceedingly simplistic, and hence in no way represent real-world build systems. It's very easy to have a build system look nice and clean for trivial examples; where it breaks down is when the software it builds gets more complicated and hacks and extra code accumulate, turning the build system into a big mess.

reply

apples2apples 2 days ago

link

Misses the fundamental point that Make is broken for so many things. To begin with you have to have a single target for each file produced. Generating all the targets to get around this is a nightmare that results in unreadable debug messages and horribly unpredictable call paths.

nix tried to solve much of this, but I agree it can't compete with the bazillion other options.

reply

webjprgm 2 days ago

link

It does not miss it, just ignores it. The author states that there are lots of things we can improve but the point is that we have too many variations on the theme without converging to a solution that has few (or no) dependencies and comes with built-in build knowledge and the ability to discover what you want rather than make you declare it.

Such a tool should be:

- Zero (or few) dependencies. Likely written in plain C (or C++, D, Rust) and compiled to distribute in binary form.
- Cross-platform.
- Support any mix of project languages and build tasks.
- Recognizes standard folder hierarchies for popular projects.
- Easy enough to learn. Not overly verbose (looking at you, XML). Similar to Make if possible.

Examples of the auto-discovery: It can find "src", "inc", and "lib" directories then look inside and see .h files then make some educated guesses to build the dependency tree of header and source files (even with mix of C and C++). Or it could see a Rails app and figure out to invoke the right Rake commands, perhaps checking for the presence of an asset pipeline etc. Or a Node.js project. It could check for GIT or SVN and make sure any sub-modules have been checked out.

reply

danielweber 2 days ago

link

The dependencies thing is a killer. I remember a Windows developer co-worker insisting that everyone had the .NET runtime installed, and after shipping it turned out that most of our customers didn't have it installed, to which he finally said, "well, I always have it installed." (To be fair, I should have pressed him harder, and I did ask the question twice, but because I'd never built against the runtime I was unprepared for any challenge.)

Almost every new project I download starts with a sad, manual, and demoralizing installation of a bunch of third-party stuff that you have to google to find out what's missing. And it's not educational at all, because in a few years all these tools will now be obsolete.

(The best project I ever encountered was the Stripe CTF, which almost always used just one command to install a complete working copy of everything you needed and didn't have. I'm still impressed with that.)

reply

gyepi 2 days ago

link

Some of these requirements should be built into any build tool. However, most can be added easily enough:

For instance, redux [1] is written in Go (not compiled for binary distribution, but I could add that), is cross platform, supports any mix of languages and tasks, is very easy to learn.

It uses shell scripts to create targets so everything is scriptable. Stuff like recognizing standard folder hierarchies and auto-discovery can be added with small scripts or tools. It can be as simple as you want or as complex as you need.

reply

exogen 2 days ago

link

Could you post an example of what you mean by the single target/file limitation? As stated I can't tell how implicit rules or a rule to build an entire directory wouldn't be a solution, but maybe I'm not understanding the problem.

reply

apples2apples 2 days ago

link

Sure, consider a compiler that produces an object file (foo.o) and an annotation file (foo.a). Now if a target requires both foo.o and foo.a you have to create two targets for them (even though it's really one command).

You can use implicit rules, which requires a very verbose makefile; that is what automake and other make-generation tools do. God help you figure out what went wrong.

If you make people go to a directory approach you've now imposed a new structure on their code. One reason for the multitude of packages is each one matches their target community better.

reply

spc476 1 day ago

link

I was able to do that with GNU Make. Granted, the syntax is a bit ... odd, but it was doable.

    ASM = A09/a09
    %.o %.l : %.a
            $(ASM) -o $(*F).o -l $(*F).l $(*F).a
    clean:
            /bin/rm -rf *.o *.l
    foo : disasm.l
    bar : disasm.o
    baz : disasm.l disasm.o

The target baz has both a .l and a .o, both of which are produced in one command. The line that begins with "%.o" starts an implicit rule, which loosely states, in English: "to produce a .o file, or a .l file, run the following ...". $(*F) is a GNUism that maps to the filename of the source (directory part, if any, is stripped). This works. I tested all three targets (foo, bar, baz) with a "make clean" between each one.

(and for the really curious, a09 is a 6809 assembler; disasm.a is a 6809 diassembler, written in 6809; binary is a 2K relocatable library)

reply

taeric 2 days ago

link

I don't understand.

If both the .o and the .a are created from another file, wouldn't it be safe to just rely on either one of them? (Obviously, you will need to be consistent in choice.)

That is, if every time a .o is created, so is the .a, then where is the difficulty? Just rely on one (the .o). I could conceive of a scenario where the .a updated but the .o didn't, but I don't know of any tools that really work that way right now. I thought the norm was to at least touch all output files.

Further, if that is happening, seems you are safest having two rules, anyway.

reply

Someone 2 days ago

link

Say you have a long build process and do a quick semi-clean by hand to speed up the next build (not the best idea, but not inconceivable), deleting the .a files but forgetting to delete the matching .o files. Then your next build will produce some novel (to you) error messages that may take long to clean up. Worse, the command building on the .o and the .a might just say "OK, I'm given a .o without a .a; fine, then I'll do a slightly different thing".

Also, having two rules means duplicating a command:

    foo: foo.a
        baz $(BAZ_OPTIONS) foo
    foo: foo.o
        baz $(BAZ_OPTIONS) foo

That's bad from a maintenance perspective.

reply

Too 2 days ago

link

Invoking the command twice can also screw up things if you run a parallel build, which you should always do! Not only does it speed things up, it's also a good way to verify that your make file actually is correct. If your make file doesn't work in parallel build it is broken, in the same way as C code that breaks at -O2 and above due to reliance on undefined behavior.

The solution to the multiple target problem is using the built in magic .INTERMEDIATE rule which isn't entirely obvious how it works.

reply

taeric 1 day ago

link

Ok, that makes sense. I'm tempted to rattle the knee jerk, "don't go deleting random crap," but I realize that is a hollow response.

I'm curious how .INTERMEDIATE helps in this case. I did find this link[1], which was a rather fun read down how one might go about solving this, along with all of the inherent problems.

[1] http://www.gnu.org/software/automake/manual/html_node/Multip...

reply

mzs 2 days ago

link

Or if you don't like taeric's suggestion you can just touch a .ao file after the line that creates the .a and .o files and have your further rule(s) depend on that .ao file. Have .ao depend on your source. If you still want to be able to type stuff like 'make foo.a' instead of 'make foo.ao' and have it work, then you can make a rule where .a depends on .ao and all the rule does is touch the .a file. Create the same rule for the .o too.

reply

nshepperd 1 day ago

link

Huh? Doesn't this work:

    all: copied o a
    source:
    	echo "message" > source
    foo.o foo.a: source
    	(echo -n ".o: "; cat source) > foo.o
    	(echo -n ".a: "; cat source) > foo.a
    o: foo.o
    	cat foo.o > o
    a: foo.a
    	cat foo.a > a
    copied: foo.o foo.a
    	cat foo.o foo.a > copied

The third rule simulates a compiler producing two outputs. Now if foo.o changes, both "copied" and "o" will be updated, and if foo.a changes, both "copied" and "a" will be updated. (And if either foo.o or foo.a are deleted, the compiler will be rerun, as will everything depending on foo.a or foo.o.)

This is gnu make 4.0.

reply

hooya 2 days ago

link

> To begin with you have to have a single target for each file produced.

Try this next time (only the pertinent lines are included):

  SOURCES=$(wildcard $(SRCDIR)/*.erl)
  OBJECTS=$(addprefix $(OBJDIR)/, $(notdir $(SOURCES:.erl=.beam)))
  DEPS = $(addprefix $(DEPDIR)/, $(notdir $(SOURCES:.erl=.Pbeam))) $(addprefix $(DEPDIR)/, $(notdir $(TEMPLATES:.dtl=.Pbeam)))
  -include $(DEPS)

  # define a suffix rule for .erl -> .beam
  $(OBJDIR)/%.beam: $(SRCDIR)/%.erl | $(OBJDIR)
  	$(ERLC) $(ERLCFLAGS) -o $(OBJDIR) $<

  # see this: http://www.gnu.org/software/make/manual/html_node/Pattern-Match.html
  $(DEPDIR)/%.Pbeam: $(SRCDIR)/%.erl | $(DEPDIR)
  	$(ERLC) -MF $@ -MT $(OBJDIR)/$*.beam $(ERLCFLAGS) $<

  # the | pipe operator defines an order-only prerequisite, meaning
  # that the $(OBJDIR) target should be existent (instead of more recent)
  # in order to build the current target
  $(OBJECTS): | $(OBJDIR)

  $(OBJDIR):
  	test -d $(OBJDIR) || mkdir $(OBJDIR)

  $(DEPDIR):
  	test -d $(DEPDIR) || mkdir $(DEPDIR)

I've been using a makefile about 40 lines long and I've never needed to update the makefile as I've added source files. Same makefile (with minor tweaks) works across Erlang, C++, ErlyDTL and other compile-time templates and what have you. Also does automagic dependencies very nicely.

> Generating all the targets to get around this is a nightmare that results in unreadable debug messages and horribly unpredictable call paths.

If you think of Makefiles as a series of call paths, you're going to have a bad time. It's a dependency graph. You define rules for going from one node to the next and let Make figure out how to walk the graph.

reply

demallien 2 days ago

link

Heh. The one and only time that I ever wrote a parser in my professional career was for a build tool. In my defence, at the time I didn't know much about command line tools, and had only really programmed in IDEs. So when the new project was to be compiled on the command line, I quickly discovered that maintaining dependencies, changing targets and doing all the other things that a build system generally does by hand quickly gets old. Not knowing that autotools, cmake, ant, and about a bajillion other tools already existed to do just this, I wrote my own language, with a parser in ruby, no less :D

I have since repented. I find autotools (with occasionally a script of ruby/python/perl to handle something that would otherwise be tricky to do in make or m4, which is then called by the makefile) works a treat. Just don't try to do anything tricky in the autotools files - as I said, boot anything exotic out to a separate tool.

Also, any discussion of build tools without also discussing package management is but half a discussion.

reply

malkia 2 days ago

link

Make is the "assembly" language for build systems. Qt's qmake/cmake target it, and the output produced is horrible, but then using "make -j8" or qt's own "jom.exe -j8" as replacement for the underperforming nmake.exe and you are all set.

reply

eponeponepon 2 days ago

link

I'm unreasonably fond of Ant - there's plenty of scope for pointless clever-dickery, and there are days where that's all that keeps me going!

Nice to see it mentioned in a context other than "oh god what a mess"... even though, in fairness, many aspects of it are a complete dog's dinner.

reply

revscat 2 days ago

link

Ant is a Turing-complete language in XML.

That is horrifying.

It is bloated, difficult to read, tends towards duplication. It also doesn't do dependency management all that well, doesn't cache build results (so it does a complete rebuild every time), and is difficult to extend.

Not an Ant fan. I have used both Rake and Gradle successfully, and have been much happier with each. Their scripts tend to be (much) more compact, easier to read, and less prone to duplication.

reply

ndrake 2 days ago

link

I agree with a lot of your points, but can you explain the part about it doing a complete rebuild every time? It doesn't do that for me (unless I specifically tell it to).

reply

revscat 2 days ago

link

My apologies: I just wrote a basic HelloWorld.java and a build.xml to go with it, and it looks like it doesn't recompile the class unless there is a change to the source .java file. So I was mistaken about that.

Wonder what Gradle's caching, then?

reply

vorg 2 days ago

link

> Wonder what Gradle's caching, then?

Let's hope it's catching another scripting language or two in its upcoming version 2 because having Groovy as the only option does it no favors.

reply

josephschmoe 2 days ago

link

I've never felt hindered by Gradle.

Hindered by the fact I can't add an arbitrary github repo through Gradle? Yes. That seems like it should be solvable though...

reply

twic 2 days ago

link

Add an arbitrary GitHub repo to what? In what capacity? You mean as a source dependency, an artefact repository, what?

reply

-- https://news.ycombinator.com/item?id=7621747

---

http://codeofrob.com/entries/you-have-ruined-javascript.html

summary: this guy complains that with first-class functions there is no need for Angular to have all of service, provider, and factory, and that that's too abstract for him. lots of commenters reply that angular is great and he is dumb.

some useful comments:

"

Nick Roth • 17 hours ago

If your critique of Angular is that it needs all these patterns to operate, I don't think you understood the answer correctly. It may not have been clear from the SO answer or especially from that code example but the author is showing how services and factories are just providers underneath the covers. The author is showing different ways to define what are essentially equivalent components. Only one of them would be needed in an actual application.

    acjohnson55 (to Nick Roth) • 15 hours ago

    The problem with Angular is that all 3 exist in the first place.

        Nick Roth (to acjohnson55) • 15 hours ago

        They all exist because they each have a different use and a developer should select the one that fits best what's needed out of that component (read: use the right tool for the job). The SO example is clearly a trivial use case designed to show the relationship between the different types. I'm not quite a fan of everything in Angular and Angular isn't always the right tool for the job. But that doesn't mean I'd rather make fun of it or the developers that use it instead of trying to understand it and why people find it useful.

"

"

Nathan • 8 hours ago

Rob, I think you've confused "over-wrought enterprise 'solution' with layers of unnecessary complexity and abstraction" with "lackluster documentation written mostly by architects full of jargon that's confusing to newcomers". AngularJS surely isn't perfect, but to suggest jQuery was the zenith of where we've gone with front end is throwing the baby out with the bathwater.

For instance, you say:

> What the actual fuck is this? I read this as "in order to do a hello world, you must first create a hello world service to create the hello world factory to create the hello world sercice so you can print hello world on the screen."

I feel that your negative feelings about this are due to a misreading and unwillingness to understand the principles Angular is built from, namely ease of testability, not from unneeded complexity introduced by the Angular devs for people who need "hand-holding", whatever that means. A "Hello World" in Angular would be something more like this: https://angularjs.org/#the-bas... . No jargon, just note the two-way data binding. Hey, cool. Walk before you can run and so on.

> I'd write a plain old JS equivalent but trying to wrap my head around all of the indirection in the above example is making me want to crawl under a desk and bang my head on the floor until the brainmeats come out so I don't have to subject myself to this madness any further.

Instead of having such a visceral reaction, let's hear what you consider to be a better alternative. It's easy to criticize the work of others and not put yourself out there to potentially be criticized. If you can't be troubled to make an argument about why Angular is bad other than "I don't like it, meh, it reminds me of my enterprise days, oh by the way I'm not going to suggest anything better but I will vaguely allude to NPM modules as the golden age of JS" then don't expect us to take you seriously.


"

"

Honestly, I didn't even feel like provider, factory, or service was that hard either. Each one is just a shortcut for a simplified version of the one before it. Just take a peek at the source:

I wonder how many people realize that a factory is one single line of code.

    function factory(name, factoryFn) {
      return provider(name, { $get: factoryFn });
    }

If you look at line 674 you can even see a "value" function, and it's literally just a factory that always returns a single value.

    function value(name, val) {
      return factory(name, valueFn(val));
    }

Nobody ever talks about that one though since it's so simple.

I don't know. I guess I just realized early on that they were all closely related. They're really just one big provider utility to me, with a few shortcuts for convenience. Maybe a lot of this confusion would have been mitigated if Google didn't talk about all three of them so much, as if they were so different from one another.


"

"

Neil Barnwell • 18 hours ago

I agree it does all seem over complicated when "newing-up an object" would appear to suffice. Are you concerned that the service/factory/provider stuff exists in Angular, or that people are using it, or that some people just tend to write that sort of stuff blinding thinking that more layers of abstraction must be better? I like your thinking, generally - smaller, simpler, more nimble code that does what is needed without being swallowed by the code that's wrangling the over-complex framework-du-jour.

1 • Reply • Share ›

    Avatar
    Stu Neil Barnwell • 17 hours ago
    There do seem to be a lot of concepts to learn to use it efficiently ... I had deja vu to when I tried Android, and some document like "basic concepts" was pretty long.
    This is the new challenge: to build powerful APIs that don't have too many concepts in them. All of this service/provider etc it does have that horrible feel of J2EE ..
    •
    Reply
    •
    Share ›
            −
        Avatar
        Chris Warburton Stu • 16 hours ago
        The "horrible feel of J2EE" is, IMHO, mostly due to Java's lack of first-class classes and first-class functions. This is terribly stifling, since a) it forces everything to be first-order and b) it forces us to reify all of the intermediate steps.
        This is bad, but it's a consequence of the language. Seeing it infesting languages like JS is worse, since these languages allow higher-order programming, and don't even have classes to worry about (if you do decide to implement your own class-like dynamic dispatch mechanism (why??) then at least the resulting functions *will* be first-class citizens).
        Not only are these abstractions inappropriate for higher-order languages like JS, they're also completely unnecessary. Just looking at their implementations shows you that it's all just functions! Why treat them as anything else? Questions like "Should I use a Provider or a Factory?", etc. can be rephrased as "What should my function do?". That, right there, is the right question to be asking. The only things that matter are that /the answer will be different for different problems/ and /you need to actually implement the function/, not just hide the real problem under layers of abstractions.

"

"

Alex Ford Max Pollack • 5 hours ago

If creating bindings between handlers and controls DID constitute "logic in the presentation", then so do the "id" attributes that you add to the markup solely so that a library like jQuery can get a reference to the element. It's functionally identical to an Angular directive. People fail to realize that all Angular does is extend HTML while adhering to the exact same standards people already follow every day.

This:

    <script>
    $(document).ready(function () {
        $('#myBtn').click(function (e) {
            alert('did stuff');
        });
    });
    </script>
    <button id="myBtn">Click Me</button>

...isn't very different from this:

    <script>
    function myController($scope) {
        $scope.doStuff = alert.bind(window, 'did stuff');
    }
    </script>
    <button data-ng-click="doStuff()">Click Me</button>

If your complaint is about putting stuff in markup to act as bindings for code that later comes and actually creates the binding, then the "id" attribute violates that concern too. Both of those examples are DRASTICALLY different from this:

    <script>
    function doStuff() { alert('did stuff'); }
    </script>
    <button onclick="doStuff()">Click Me</button>

...because the attribute itself is what triggers the logic to create the event handler (and let's not forget about the global function we pretty much HAD to create). I have no idea why so many people think they are the same thing.

"

"

skore 16 hours ago

I have no idea why the author tries to judge a framework by a stackoverflow post. A lot of the examples are needlessly verbose and some are even wrong, like the "providerprovider" he rightly mocks.

Well, how about we look into the actual documentation?

https://docs.angularjs.org/guide/providers

What you find is that you define a provider like so:

    myApp.provider('Car' //...

and then, if you want to configure it, you inject it with the suffix 'Provider' into your config function (called when launching the app):

    myApp.config(["CarProvider", function(CarProvider) { //...

So - what's so ridiculous about that?


Next up, he still does not seem to understand the difference between Services, Factories and Providers. That's cool and all, and might even prompt some to, you know, investigate further and assume that maybe they haven't understood it correctly yet before clicking the "must blog now" button. He jumps the shark (straight to ridicule) by making up his very own nonsense:

> Of course we can configure providers if we need to configure our applications. How else could we configure our applications and make them configurable for our enterprise scale configurable applications.

You know, actually the distinction makes perfect sense. A provider is an object that you know you will want to configure globally before it is used (data source providers, URL routers, etc.). A service sits on the other end of the spectrum and is called fresh right when you need it (more in the direction of the pure objects he has in mind). Factories sit in between those two - they cannot be configured ahead of the application, but allow for more internal functionality instead of just returning an object.

Having the distinction helps people editing the code figure out what they want to do and others, in turn, to understand what the other programmer was up to. Yes, you can use them somewhat interchangeably, but that's life for you: things overlap. How well and strict they are used is up to the programmer.

Now, I'm not bashing plain, custom OOP Javascript - If that's your thing, by all means do it, knock yourself out! But what's with the hating on people who seem to have a different approach? Wouldn't it be healthier to first try to understand where they are coming from? Surely they can't all be totally in love with wasting their time on "ruining it".


In the end what this comes down to is that the author doesn't seem to have sufficient experience with the kinds of environments that make the structures in AngularJS not just pleasant, but actually a lifesaver. That's fine - we don't need to all have the same perspective. I just don't get this style of argumentation that boils down to "don't understand, must bash".

He also makes fun (as does one of the SO comments) of this quote from the (old?) angular docs:

> An Angular "service" is a singleton object created by a "service factory". These service factories are functions which, in turn, are created by a "service provider". "The service providers are constructor functions". When instantiated they must contain a property called $get, which holds the service factory function.

You know what? Fuck it. When I tried to understand the Service/Factory/Provider distinction, I stumbled upon the same SO post and you know what ended up being the perfect way of understanding it? Forcing myself to go back to that paragraph right there until I understood it.

...

sergiosgc 16 hours ago

I think most people here know and understand the patterns involved here: Factory and Provider Model. From this knowledge, I guess the post is easily recognizable as exaggerated. This does not invalidate the gist of his argument. What the post author complains about (and I agree with him) is:

1. These patterns add complexity from the get-go of an app. He ridicules it, and picks both an extreme example and an extreme Angular solution, but the message is sound.

2. These patterns have failed us in the past. Anyone who has worked on large enterprise projects has dealt with the immense complexity and, surprisingly, with the lack of configurability in the end result (a failure of the pattern objective).

I'd add a last point, not explicitly stated:

3. Last, but definitely not least, Javascript does not need these! Javascript is not a classical object-oriented language.

Let me try my hand at supporting my last point. Imagine you want your database connection to be globally configured. You also want code needing a connection to be decoupled from the exact implementation. Just refer to a mythical DBConnection object in the DB client code:

  connection = DBConnection.singleton()

And have it so that the DBConnection object instance exists prior to being called (instantiate it in the configuration stage of the app):

  DBConnection = { prototype: new PgsqlConnection(...) }

Javascript being classless[1] and supporting prototype based object inheritance invalidates many of the assumptions behind patterns in use in classical OO languages. This is even more relevant when we are talking about flawed patterns.

[1] Yes, ECMAScript 2 and beyond have classes. It's an error, never got traction in the real world, and I hope it never does.

ryandrake 15 hours ago

Keep adding all these complexity-increasing patterns, and pretty soon, you end up with a class called TransactionAwarePersistenceManagerFactoryProxy.

http://stackoverflow.com/questions/14636178/unsure-if-i-unde...

..

anon4 17 hours ago

An abstraction is not useful, in fact it's downright harmful, if it does exactly the same thing as what it abstracts over, but more verbosely.

A lot of these patterns you're used to seeing in Java stem from Java's static nature.

Here's a car factory in javascript:

  function makeCar(dealerName) { return new Car(dealerName); }

And here's a configured car factory:

  var myCarProvider = makeCar.bind(null, "my dealer");

It makes precisely zero sense to create abstractions that reimplement core language features. That would be like writing a C function to add two uint8_ts, or a macro that expands to inline assembly to convert a float to an int32_t. You already have these things built-in, you don't need a gorillion enterprisey patterns.

amalcon 16 hours ago

This of course has nothing to do with staticness, and everything to do with Java's lack of first-class functions. In any flavor of ML, for example, nonzero-argument versions of this pattern are slightly cleaner (the exact pattern is unnecessary due to immutable state).

> or a macro that expands to inline assembly to convert a float to an int32_t

This sort of thing is actually useful on occasion, because it allows you to specify the exception handling you actually want (what to do on inf/-inf/NaN?). Of course, it's better to do it with a pure C function, but it's still a "reimplementation of a language feature".

Of course, you only ever do this when you want subtly different behavior than the language provides, so I think your larger point still stands.

haberman 16 hours ago

Here's a car factory in javascript [...] And here's a configured car factory

Great, now where is the part where the code that provides the "my dealer" string can be completely unaware that makeCar() exists?

Oh, it doesn't exist? That's the problem that Angular DI is solving.

klmr 16 hours ago

Here:

    var factory = makeCar;

The whole point of having first-class functions is that you can assign them. The problem a factory solves is precisely the lack of this assignability (which is why a constructor function is wrapped in a class).

haberman 13 hours ago

Please put in the effort to understand at least the basic idea of dependency injection before deriding it in favor of "simpler" solutions that do not solve the same problem.

The specific pattern DI can solve here is:

Your "solution" does not satisfy the requirements because it requires you to manually construct the objects in the correct order. DI on the other hand will automatically analyze the graph of dependencies, invoking the appropriate factories in the right order.

masswerk 12 hours ago

Sorry to say, but klmr is right. Since functions are first-class objects and references are evaluated late, these issues are already managed by JS and native scopes. There's no need to recreate this functionality in a framework. You might just want to consider the entry point to your code.

masswerk 11 hours ago

P.S.: What this really is about: unit tests originally designed to go with C/C++ do not work well with late binding; in fact, late binding is a concept totally foreign to those languages. It's essential to understand that these kinds of frameworks serve first and foremost the purposes of test suites, and only secondarily the needs of developers. (Developers should not mistake themselves for test suites.)

..

zoomerang 15 hours ago

You're completely missing the point and solving the wrong problem. Function or class, it's a minor semantic difference for the same thing.

The issue is not how you define your factories, it's how you pass them around and use them in a component that has no knowledge of the outside world.

tomp 15 hours ago

> it's how you pass them around and use them in a component that has no knowledge of the outside world

You pass it as a function argument.

    function needs_to_make_a_car(car_maker, road) {
      var car = car_maker()
      car.drive_on(road)
    }

To which you would probably reply: "But that's exactly what the FactoryFactory/DependencyInjection/... pattern is!"

To which we would reply: "Exactly, which are just fancy words for the re-and-re-discovered concept of functional programming."

haberman 13 hours ago

> To which you would probably reply: "But that's exactly what the FactoryFactory/DependencyInjection/... pattern is!"

Actually no. DI does a lot that your simple examples do not. Notably, it understands the graph of dependencies between decoupled components and constructs provided values in the correct order.

adamtj 13 hours ago

DI is a design pattern. A design pattern is a conventional way to structure code to solve a particular type of problem. A way to structure code is not a thing that can analyze graphs.

Maybe you're talking about some particular DI framework. That would be a library with functions and classes to make implementing the DI pattern easier. There are many such frameworks, and some of those frameworks may well do automatic graph analysis. But, graph analysis is not inherent in the pattern.

Your specific framework is not the general pattern.

..

haberman 16 hours ago

What the actual fuck is this? I read this as "in order to do a hello world, you must first create a hello world service to create the hello world factory to create the hello world service so you can print hello world on the screen."

Well, you misunderstood, and now you're ranting according to your misunderstanding.

I'm not an Angular expert, but I'm pretty sure that the situation is this:

1. "Hello world" does not require any dependency injection whatsoever. (ie. it does not require services, providers, or factories). Note: if this was not true, I would agree with this rant completely.

2. If and when you decide you want dependency injection, if what you want to inject is a simple value, you can just provide it directly, without using a service, provider, or factory.

3. If and when you decide that you want to provide a dynamically-generated value, you can specify a function instead which will get called whenever the value is requested. This is called a "factory."

4. If and when you decide that you want your factory to just "new" an object instead of running an arbitrary function to provide the value, you can register that object type as a "service."

5. If and when you decide that you want to configure that object type first, you can register it as a provider.

If you want to stay at step 1 forever to keep things simple, groovy.

This is an example of success, not failure: you pay for only as much complexity as you actually want to use.

"

-- https://news.ycombinator.com/item?id=7633175

---

" e. For example, it would definitely be horrible if your browser’s scripting lan - guage combined the prototype-based inheritance of Self, a quasi-functional aspect borrowed from LISP, a structured syntax adapted from C, and an aggressively asynchronous I/O model that requires elaborate callback chains that span multiple generations of hard-working Americans. OH NO I’VE? JUST DESCRIBED JAVASCRIPT. W "

"

... story for interacting with HTML. So, Java lost, despite facts like this: JavaScript is dynamically typed, and its aggressive type coercion rules were apparently designed by Monty Python. For example, 12 == "12" because the string is coerced into a number. This is a bit silly, but it kind of makes sense. Now consider the fact that null == undefined. That is completely janky; a reference that points to null is not undefined—IT IS DEFINED AS POINTING TO THE NULL VALUE. And now that you're warmed up, look at this: "\r\n\t" == false. Here's why: the browser detects that the two operands have different types, so it converts false to 0 and retries the comparison. The operands still have different types (string and number), so the browser coerces "\r\n\t" into the number 0, because somehow, a non-zero number of characters is equal to 0. Voila—0 equals 0! "

" Did you know that JavaScript defines a special NaN ("not a number") value? This value is what you get when you do foolish things like parseInt("BatmanIsNotAnInteger"). In other words, NaN is a value that is not indicative of a number. However, typeof(NaN) returns... "number."

...

NaN != NaN, so Aristotle was wrong about that whole "Law of Identity" thing

...

JavaScript defines two identity operators (=== and !==) which don't perform the type coercion that the standard equality operators do; however, NaN !== NaN "

bayle: afaict the three NaN things are just the IEEE standard way NaN is supposed to work, not JS's fault, but it's true they're counterintuitive

he makes another point that since you can modify stuff in the base classes at runtime (monkey patching), you could, for example, cause all numbers to be 42. This is bad in his opinion (and i think in mine, though i'm not sure. Perhaps if you could override them only in certain cases, within "domains" that you control? is that the way it is already?).

" Much like C, JavaScript? uses semicolons to terminate many kinds of statements. However, in JavaScript?, if you forget a semicolon, the JavaScript? parser can automatically insert semicolons where it thinks that semicolons might ought to possibly maybe go. This sounds really helpful until you realize that semicolons have semantic meaning . You can’t just scatter them around like you’re the Johnny Appleseed of punctua - tion. Automatically inserting semicolons into source code is like mishearing someone over a poor cell-phone connection, and then assuming that each of the dropped words should be replaced with the phrase “your mom.” This is a great way to cre - ate excitement in your interpersonal relationships, but it is not a good way to parse code. Some JavaScript? libraries intention - ally begin with an initial semicolon, to ensure that if the library is appended to another one (e.g., to save HTTP roundtrips during download), the JavaScript? parser will not try to merge the last statement of the first library and the first statement of the second library into some kind of semicolon-riven statement party. Such an initial semicolon is called a “defensive semico - lon.” That is the saddest programming concept that I’ve ever heard, and I am fluent in C++. "

-- https://www.usenix.org/system/files/1403_02-08_mickens.pdf

note to self: i already read the rest of that link and the only good parts, for this purpose, are above

--

http://discuss.joelonsoftware.com/?joel.3.219431.12

---

bananas 16 hours ago

Engineer and architect of large enterprise products here (2 million line Java/C# behemoths that do all sorts of weird and complicated financial shit).

Reality is actually as follows. Not joking, I've done this job for 15 years and worked with several large companies including one you mention. Perhaps your experience is in the 1% of "enterprise" companies who have a clue, but this by far is the majority:

Bold statement here: 99% of the use cases of all these patterns are totally pointless and a waste of money and time. They add nothing to the product; they increase complexity and decrease performance. It's cheaper and more reliable to chop your product up and write each chunk separately, in whatever language, with no official architectural pattern imposed system-wide.

Typically in the real world, you're going to end up with:

1. Literally tonnes of DI/IoC code and configuration for an application with an entirely static configuration. This is either in XML or DSL form using builder patterns. Consumes 30 seconds or more to start the application up every time. Aggregated over 50 developers, that's 16 hours a day pissed out of the window.

2. Proxies for anaemic domain models that are fully exposed. Consumes 30 seconds to generate proxies that do sod all. Aggregated over 50 developers, that's 16 hours a day pissed out of the window.

3. Leaky broken abstractions everywhere actually destroying the entire point of the abstractions. Makes refactoring impossible and maintenance hell. It's better in some cases that they are not even there and that basic savvy is used over COTS patterns.

4. Acres of copy pasta. Why bother to write a generic implementation when you can copy the interface 50 times and change the types by hand?

5. Patternitis. So we need to use the BUILDER pattern to write the query to connect to the DOMAIN REPOSITORY for the AGGREGATE to call the CODE GENERATOR that fires up the ADAPTER to generate the SQL using the VISITOR PATTERN which farts out some SQL to the database PROVIDER (and a 90 layer stack dump when it shits itself). This is inevitably used in one small corner while everyone in the rest of the system hits a wall in the abstraction and says "fuck it" and writes SQL straight up and skips the layers of pain and because there are no code reviews (because people are all sitting there waiting for their containers to start up whilst posting cats on Facebook).

LINE 10 (remember this)

These things are never rewritten or refactored. They slowly evolve into a behemoth ball of mud which collapses under its own weight to the detriment of customers. A team of enterprise architects (usually from ThoughtWorks etc.) usually appears at this point, attempts to sell outsourcing services that will "fix all the shit" for a tiny fee, and leaves a 200-page powerpoint (with black pages just to fuck your printer up). The company struggles on for a few years and is saved at the last minute by a buyout by a company at an early stage of the cycle, who slowly port all the customers and buggy shit to their product platform. Then the team either dissolves and knowledge is lost, or they take a cash sum from the sale and start up some ball-of-shit company that does the same again.

GOTO 10.

That's enterprise 101 because the people that have been hit by the clue sticks know better than to subject themselves to this and do work in SF and SV and London. Me: I'm a masochistic crazy person who wonders every day why I don't just go and write web sites for half the cash or flip burgers because it's less painful.

sgt 14 hours ago

When done extremely poorly, enterprise projects can indeed end up as slow behemoths. I agree that this is often the case, but it doesn't mean that the design patterns backing powerful enterprise solutions are to be taken lightly.

The adapter pattern does have huge uses, proxies do have legitimate uses, and the visitor pattern can be made to be extremely powerful. Of course, if you have a development sweatshop throwing these out left and right, then yes, you will have an abstract mud ball that is impossible to maintain, and that in the end defeats the original purpose of these patterns.

...

mattmanser 16 hours ago

You mentioning Java actually reminds me: what I find bizarre about the recent trend in javascript is that they're creating the boilerplate on purpose. In Java the boilerplate was a creation of the language, but in javascript it's a creation of the community! It's like some sort of sick joke.

The stuff I hate most is the boilerplate that means you find crappy little singletons everywhere when all that was needed was a function. And for the love of god, a function that's named and not pointlessly assigned to a var because you're mono-linguistic and haven't realised what the actual point of functions is yet.

Huge chunks of the javascript in the web site I've taken over were done by someone who loved these patterns. This is an example of his favourite one; pointless, and it led to colossal code bloat as he used it over and over for basic handlers.

    function instantSearch() {
        var instantSearchObject = {
            config: {
                animSpeed: 250
            },
            init: function () {
                this.findComponents();
                this.bindEvents();
            },
            findComponents: function () {
                this.searchBlockCollection = $('.search, .entry-search');
                this.searchInputCollection = this.searchBlockCollection.find('input:text');
            },
            bindEvents: function () {
                var self = this;
                this.searchInputCollection.on({
                    'focus': function () {
                        var currentInput = $(this),
                            currentSearchSuggestionsBlock = currentInput.parent().siblings('.search-suggestions');
                        currentSearchSuggestionsBlock.slideDown(self.config.animSpeed);
                    },
                    'blur': function () {
                        var currentInput = $(this),
                            currentSearchSuggestionsBlock = currentInput.parent().siblings('.search-suggestions');
                        currentSearchSuggestionsBlock.slideUp(self.config.animSpeed);
                    }
                });
            }
        }.init();
    }

Nigh on 30 lines of code, so few lines that actually do something. I tend to refactor them when I can so I can see what they're actually doing; the following code should do exactly the same thing (without me testing it):

    function bindSearchBox() {
        $(".search input, .entry-search input")
            .on("focus", function () {
                $(this).siblings('.search-suggestions').slideUp(250);
            })
            .on("blur", function () {
                $(this).siblings('.search-suggestions').slideDown(250);
            });
    }

Hmmm, even just refactoring that tiny bit of code shows that there might even be a bug there, or at least pointless code; I'm not sure why he's sliding the search results up on focus, since they don't stay open.

masswerk 12 hours ago

What I personally really do enjoy is "var self = this;". Now you just overwrote the system variable pointing to the global object. What was this good for? Oh, the other language doesn't use "self" as a system variable, so it must be worthless in JS? Hmm.

Now we just have to define "var global = (function() {return this;}).apply(null);" and we have a reference pointing to the global object! Pure magic! Really? Are you serious? Sorry to hear so. (No, I was not suggesting to use some kind of more elaborate pattern for this. See, there is "self" ...)

(When ever you see "var self = this;", take your things and run.)

mattmanser 9 hours ago

Err, that's totally ok. That's useful for keeping a reference to the object scope in a closure. You can't really write advanced javascript without it.

AFAIK in all the c-style languages `this` is the usual self reference. Python & Ruby are self. VB.Net is Me.

But javascript screwed `this` up and this is especially apparent when you use events where `this` ends up being the caller instead of the callee. Having a self reference in the closure allows you to fix the problem.

I think people ended up using self for a reason, it's strange enough in a C-style language that you're not expecting it to be anything, but familiar enough to be obvious. The other common ones over the years have been `_this` & `that`.

There are patterns in javascript and patterns. We need Crockford to write, "Javascript Patterns: The Good Ones".

..

im3w1l 17 hours ago

I think this really hits the nail on the head. When you start out with an app, you just want to say

func: print "Hello World"

and be done with it. But when you grow, you realize that you need it to be able to print hello world in many languages so you add I18N

func: unicodePrint(geti18N(HELLO_WORLD))

getI18n(I18N_KEY): curlang.get(I18N_KEY)

1091_en: "Hello World"

and then someone accidentally broke something so you want to be able to unit test it.

func(printer=stdoutprinter): printer.unicodePrint(geti18N(HELLO_WORLD))

getI18n(I18N_KEY): curlangmap.get(I18N_KEY)

1091_en: "Hello World"

interface printer ...

singleton stdoutprinter ...

Ideally you'd want a framework that caters to all of these stages

dev360 15 hours ago

It feels like nobody is addressing WHY Angular uses DI/IoC. In something like Java or .NET, you use it to a large extent because it separates code nicely and makes it easier to test.

In JS, one big problem is that you want your code to live in different files, and to do this, you either create your own ghetto object namespaces that hold your classes, or use requirejs.

In the case of Angular, they decided to handle this for you, so they provided IoC/DI as a means to accomplish this. They are merely doing this to make up for things that are missing in javascript as a language.

In their defense, it actually works kind of nice compared to the other two approaches imho.

--

"

We hoped that the PyPy JIT Python interpreter might save us. Unfortunately, our ARM architecture does not support the floating point instruction set PyPy requires. "

--

there should be a less confusing way to do this (throwing out the indices on two hashes, turning them into lists, and then concatenating the lists):

    import itertools

    x = [{0: 'a', 1: 'b'}, {2: 'c', 3: 'd'}]
    list(itertools.chain(*[h.values() for h in x])) == ['a', 'b', 'c', 'd']

(one thing is that if y is a list of lists, then the way to concatenate them is list(itertools.chain(*y)), which is confusing to remember or to read)

---

maybe need to have 'modifiers' as non-meta-level annotations, e.g. ways to modify nouns with adjectives and verbs with adverbs

---

this is gold:

http://www.toptal.com/python/top-10-mistakes-that-python-programmers-make

--

a plaintext db, in a similar style to todo.txt, but capable of almost everything BerkeleyDB can do

---

python has flat, namespaces, explicit errors, reify everything, get/set override, 2 composite data structs, fns, oop, what does jasper have? much of the same except flat, plus graphs minus map,lists. what else?

laziness? functionality (no implicit state)? partial function application? is everything an object? __apply__?

---

in Python, a useful idiom: give a function optional features that take a configuration parameter and are disabled if the parameter is None

--

http://www.dwheeler.com/innovation/innovation.html

--

" Nowadays I can't imagine a lot of people discounting Python because of the print statement, unicode support, division rules, or lack of yield from statement. " -- https://news.ycombinator.com/item?id=7802095

"nonlocal, asyncio, unicode, yield from. If you ask me, these four are very compelling reasons to switch to Python 3 and they solve very real problems." -- https://news.ycombinator.com/item?id=7801834

---

SoftwareMaven 16 hours ago

other than proper functional programming, maybe, which isn't about to happen in Python

If we could just have tail call optimization, I think we could make the rest work (well, maybe better lambdas, too).

irahul 8 hours ago

> If we could just have tail call optimization, I think we could make the rest work

Manually thunk/trampoline it?

    class trampoline(object):
        # decorator: keeps calling the wrapped function while it returns
        # thunks, so tail recursion runs in a loop instead of growing
        # the stack
        def __init__(self, fn):
            self.fn = fn

        def __call__(self, *args, **kwargs):
            ret = self.fn(*args, **kwargs)
            while isinstance(ret, thunk):
                ret = ret()
            return ret

    class thunk(object):
        # a delayed call: holds the function and its arguments
        def __init__(self, fn, *args, **kwargs):
            self.__dict__.update(fn=fn, args=args, kwargs=kwargs)

        def __call__(self):
            if isinstance(self.fn, trampoline):
                # call the raw function, not the trampoline wrapper,
                # so we don't start a nested loop
                return self.fn.fn(*self.args, **self.kwargs)
            else:
                return self.fn(*self.args, **self.kwargs)

    @trampoline
    def fact(n, accum=1):
        if n <= 1:
            return accum
        else:
            # the tail call becomes a thunk for the trampoline to run
            return thunk(fact, n-1, n*accum)

    print fact(1000)

    @trampoline
    def is_even(n):
        if n == 0:
            return True
        else:
            return thunk(is_odd, n - 1)

    @trampoline
    def is_odd(n):
        if n == 0:
            return False
        else:
            return thunk(is_even, n - 1)

    print is_even(1000001)

You only have to write the thunk/trampoline utility once, and it is similar to how we do it in Clojure.

--

" Rip closures out of Go, and I'd miss that. Rip out the goroutines and I'd miss not having something like generators or something, indeed, the language would nearly be useless to me. But I certainly don't miss metaclasses or decorators or properties. I will cop to missing pervasive protocols for the built-in types, though; I wish I could get at the [] operator for maps, or implement a truly-transparent slice object. But that's about it.

[1]: http://www.joelonsoftware.com/articles/APIWar.html

nostrademons 12 hours ago

My experience was Go was about 2-3x slower (development speed) than Python for prototyping. Web programming, though, which is a particular strength of Python and a particular weakness of Go. YMMV, of course.

I actually really do miss list comprehensions and properties and pervasive protocols for built-in types and really concise keyword/literal syntax. I don't miss metaclasses, and I only vaguely miss decorators. (Go has struct annotations and compiler-available-as-a-library, which in some ways are better.) Properties in particular are really useful for the evolvability of code; otherwise you have to overdesign up front to avoid a big rewrite as you switch from straight struct fields to accessors.

..

rwallace 13 hours ago

I found to my surprise that these days Java is as productive as anything else. It's not like the early days anymore; modern Java has generics, reflection, now lambda; there would still be more finger typing than I'd ideally like, but the IDEs are more than good enough to make up for that, to the point that a nontrivial program now usually takes fewer keystrokes to write in Java than almost any other language.

Which of course reinforces your main point: nowadays the language differences just aren't that big.

...

jerf 5 hours ago

It's easy to lose track of this in the torrent of "PyPy Benchmarks Show It Sped Up 2x!!!!" and "Latest Javascript Engine 50% Faster Than The Last One!!!! OMG!! Node!!!!", but in absolute terms, the dynamic language JITs are still quite slow. It's an urban legend that they are anywhere near compiled performance. Except LuaJIT. "Faster Python" is still slow. Unless you're in a tight loop adding numbers together, but I'm not.

Moreover, a plot of their performance over time rather strongly suggests that they've plateaued. Personally, barring any major JIT advances, I'm considering the book closed on the topic of whether JITs can take a dynamic language and make it as fast as C or C++: No.

(Recall that a Java JIT is another issue; it starts with the static typing of Java, and then JITs from there. It has a much larger array of much more powerful guarantees to work with, that the dynamic languages simply don't. Go will be running at straight-up C speeds (which it does not currently) long before Python will.)

...

"

--

" Compare these two programs … which is more human-readable?

First, in C++:

    #include <iostream.h>

    void main() {
        cout << "Hello world" << endl;
    }

Now, in Python (v.2.7):

print "Hello world" "

" Analyzing 30 programs written in Java and 30 written in Python by novice programmers (in Finland, aged 16–19), Mannila et al. (2006) studied the errors found in them to identify those that could be attributed to the language. The students in both groups had the same teacher and studied the same contents in the same environment; only the language changed.

The study categorized errors as relating to understanding (logic) or arising from features of the language (syntax). Four criteria were also applied to the programs as a whole: execution, satisfying specs, error handling, and structure.

Of all the syntax errors found, only two appeared in Python programs while 19 were found in the Java programs (missing brackets or semicolons, uninitialized variables, etc.). And the logic errors were also significantly fewer in the Python programs, compared to Java (17 to 40). Also, more Python programs ran correctly and satisfied the specifications, and more included error checking/handling, compared to the Java programs. "

" Studying how people use a natural language to describe a task to another human gives clues. In such descriptions, people don’t define iterations, they instead put into words set operations; they are not explicit about loops terminating; people use constraints, event-driven tasks and imperative programming, but they never talk about objects. And when these natural-language instructions are given to other participants, they have no problem following them. Processing a set of data until it's finished is natural, but incrementing an index is not.

How is this related to Python? It so happens that the language's core looping idioms can often replace index manipulation, making it more like plain English. The following examples were given by Raymond Hettinger (core Python developer) in a keynote in 2013. To get the square of the numbers from 0 to 5, you might write Python code like this:

    for i in [0, 1, 2, 3, 4, 5]:
        print i**2

But the convenient function range() makes it easy to iterate over longer lists.

    for i in range(6):
        print i**2

This has the disadvantage of creating the list in memory (not good if the list is very big). So in Python 2.7, a better way is with the xrange() function (which in Python 3 dropped the x):

    for i in xrange(6):
        print i**2

Now, suppose you want to print the colors in a list like this:

    colors = ['red', 'green', 'blue', 'yellow']

You might write this to loop over all the colors:

    for i in range(len(colors)):
        print colors[i]

But Python lets you do this instead, which looks more natural, and like Raymond says, more beautiful:

    for color in colors:
        print color

In summary, Python is a lot more like English than other programming languages, and reduces the cognitive load in learning to think computationally. "

" Matlab has bad string manipulation Indexing: Python indexing goes as it does in C … starting from 0 + Python indexing is done using brackets, so you can see the difference between an indexing operation and a function call. "

-- http://lorenabarba.com/blog/why-i-push-for-python/

---

[2]

--

http://prog21.dadgum.com/73.html

--

http://prog21.dadgum.com/4.html

--

http://prog21.dadgum.com/5.html

--

https://www.goodreads.com/author/quotes/73409.Brian_Goetz

"

“Just as it is a good practice to make all fields private unless they need greater visibility, it is a good practice to make all fields final unless they need to be mutable.” ― Brian Goetz, Java Concurrency in Practice

“Immutable objects are simple. They can only be in one state, which is carefully controlled by the constructor. One of the most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about the state of immutable objects, on the other hand, is trivial.

Immutable objects are also safer. Passing a mutable object to untrusted code, or otherwise publishing it where untrusted code could find it, is dangerous — the untrusted code might modify its state, or, worse, retain a reference to it and modify its state later from another thread. On the other hand, immutable objects cannot be subverted in this manner by malicious or buggy code, so they are safe to share and publish freely without the need to make defensive copies.” ― Brian Goetz, Java Concurrency in Practice

“Whenever more than one thread accesses a given state variable, and one of them might write to it, they all must coordinate their access to it using synchronization.” ― Brian Goetz, Java Concurrency in Practice

“Sometimes abstraction and encapsulation are at odds with performance — although not nearly as often as many developers believe — but it is always a good practice first to make your code right, and then make it fast.” ― Brian Goetz, Java Concurrency in Practice

“The possibility of incorrect results in the presence of unlucky timing is so important in concurrent programming that it has a name: a race condition. A race condition occurs when the correctness of a computation depends on the relative timing or interleaving of multiple threads by the runtime; in other words, when getting the right answer relies on lucky timing.” ― Brian Goetz, Java Concurrency in Practice

“Compound actions on shared state, such as incrementing a hit counter (read-modify-write) or lazy initialization (check-then-act), must be made atomic to avoid race conditions. Holding a lock for the entire duration of a compound action can make that compound action atomic. However, just wrapping the compound action with a synchronized block is not sufficient; if synchronization is used to coordinate access to a variable, it is needed everywhere that variable is accessed. Further, when using locks to coordinate access to a variable, the same lock must be used wherever that variable is accessed.” ― Brian Goetz, Java Concurrency in Practice

“When a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations. Volatile variables are not cached in registers or in caches where they are hidden from other processors, so a read of a volatile variable always returns the most recent write by any thread.” ― Brian Goetz, Java Concurrency in Practice

“Once an object escapes, you have to assume that another class or thread may, maliciously or carelessly, misuse it. This is a compelling reason to use encapsulation: it makes it practical to analyze programs for correctness and harder to violate design constraints accidentally.” ― Brian Goetz, Java Concurrency in Practice

“Accessing shared, mutable data requires using synchronization; one way to avoid this requirement is to not share. If data is only accessed from a single thread, no synchronization is needed. This technique, thread confinement, is one of the simplest ways to achieve thread safety. When an object is confined to a thread, such usage is automatically thread-safe even if the confined object itself is not.” ― Brian Goetz, Java Concurrency in Practice

“Locking can guarantee both visibility and atomicity; volatile variables can only guarantee visibility.” ― Brian Goetz, Java Concurrency in Practice

"

--

https://medium.com/programming-ideas-tutorial-and-experience/433852f4b4d1

--

" A framework with methods like this:

    NSInteger myCount = [[myDict objectForKey:@"count"] integerValue];
    NSArray *myArray = [myString componentsSeparatedByString:@","];
    myItem = [myArray objectAtIndex:i];

is just not going to fly in a language that (hypothetically) supports this:

    myCount = myDict["count"];
    myArray = myString.split(",");
    myItem  = myArray[i];

" -- http://arstechnica.com/apple/2010/06/copland-2010-revisited/2/

--

"

Like dealing with ARC, which is still clunky:

    @lazy var asHTML: () -> String = {
        [unowned self] in
        if let text = self.text {
            return "<\(self.name)>\(text)</\(self.name)>"
        } else {
            return "<\(self.name) />"
        }
    }

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. https://itun.es/us/jEUH0.l

" -- https://news.ycombinator.com/item?id=7836600

--

on swift:

--

Pxtl 17 hours ago

Aren't the interfaces-as-generics constraints also from c#? It seems to be c# more than anything else.

rayiner 15 hours ago

Type parameters constrained by sets of method signatures dates to 1977: http://en.m.wikipedia.org/wiki/CLU_(programming_language). See also, Theta and some of Todd Millstien's work in the 1990s.

--

CmonDev 10 hours ago

"Protocols get to play double duty as either concrete types (in which case they denote a reference type you can acquire with as from any supporting type) and as type constraints on type parameters in generic code. This is a delightful convenience Rust stumbled into when designing its trait system and I'm glad to see other languages picking it up. I'm sure it has precedent elsewhere." - C# again.

--

one commonality between hypercard and scratch and second life is a focus on different objects, corresponding to visible objects in the rendered world or in the GUI, which contain event-triggered (or message-triggered) methods

--