Mac OS Dock disappears… what to do?

Periodically, the dock at the bottom of my MacOS screen just… vanishes.
Unclear what causes this.

All the windows still work, but without the dock, navigating between windows is a hassle.

Previously I had resorted to a reboot to correct this.
Just found this tip.

In short:

  1. In a terminal window, run:
    killall Dock
  2. If the dock is still hidden, press Command-Option-D to toggle Dock hiding.

The dock is back.

NodeJS – radical advance uses JavaScript on the Server!

I have heard a couple of people in the past several weeks comment on how novel or noteworthy NodeJS is, for allowing devs to employ their beloved JavaScript language on the server!! Can you believe that? How awesome is it that we can write JavaScript on the server?!

Uh, folks, I’m here to tell ya, that ain’t new or novel.

By my reckoning, the first company to support JavaScript on the server side was Netscape. Around 1996, they shipped server-side JavaScript, under the name LiveWire, in their Enterprise Server: a JavaScript programming model for web servers. Brendan Eich invented JavaScript in a rush-rush job, 10 days in May 1995, for use in the Netscape browser. But Netscape also released a server platform that they tried to monetize, and it had a JavaScript-powered extension mechanism.

Not long after that, Microsoft released ASP, the first edition of what we now call “ASP Classic”. ASP 1.0 shipped with IIS 3.0 in December 1996, and ASP 2.0 followed in late 1997 in the NT Option Pack, a free add-on for Windows NT 4.0. An “out of band release”, they called it then. ASP allowed developers to use VBScript or JavaScript (Microsoft called it JScript then) to dynamically generate web content. (See The ABCs of Active Server Pages from 1997, content still on MSDN!)

ASP later became part of Windows NT, and in subsequent service packs and in Windows 2000, IIS + ASP was just “in there”. Prior to that, Windows NT 4.0 included IIS, but IIS was just a static web page server and CGI server. (I think.) ASP was something a little different. By the way, ASP is still supported in IIS, yea verily, after lo these many years, 17 years by my count. So… to all those people who claim that Microsoft doesn’t support technologies long enough, ASP is counterexample #1.

Yes, I am not making this up: today you can install Windows Server 2008 or whatever the latest server is, enable ASP, and then deploy your JavaScript ASP code and begin serving web apps with it. It just works.

Classic ASP works for any type of content – you can serve HTML markup of course, but also XML, or CSS (dynamically generated CSS? Maybe…), and of course client-side JavaScript. That would be server-side JavaScript dynamically generating and then emitting client-side JavaScript, and yes, it is possible. Dynamic images, plain text, any MIME type, any content. What about REST APIs? Yes, you can use Classic ASP and JavaScript running on a Windows Server to implement a REST API, which is a very common use-case for NodeJS these days. Classic ASP can do content negotiation, can inspect and set cookies, can do anything a web platform can do. This also just works.
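
To make that concrete, here is a minimal sketch of a JSON endpoint in Classic ASP with JScript. The page name, the synonym table, and the hand-rolled serialization are all illustrative only:

    <%@ Language="JScript" %>
    <%
    // synonyms.asp -- a hypothetical JSON endpoint in Classic ASP.
    // GET /synonyms.asp?w=fast  =>  {"word":"fast","synonyms":[...]}
    var word = String(Request.QueryString('w'));
    var table = { fast: ['quick', 'rapid', 'speedy'] };   // stand-in data
    var hits = table[word] || [];

    Response.ContentType = 'application/json';
    // JScript 5.x has no JSON.stringify, so serialize by hand
    var body = hits.length ? '"' + hits.join('","') + '"' : '';
    Response.Write('{"word":"' + word + '","synonyms":[' + body + ']}');
    %>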

In practice, “Classic ASP” code often resembles what I will call “Rubbish PHP” code, which is to say, it mixes markup with code, willy-nilly; there’s poor templating, poor code re-use, poor use of classes, and generally things are just an unmaintainable mess. But that is by no means required by either Classic ASP or PHP. That is just an unpleasant side effect of being really easy to use, which means novice programmers use it. Both Classic ASP and PHP have that quality. It is possible to author nicely architected ASP code, and PHP code.

Nicely architected? If I may be so bold, here’s an example.
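
What follows is a sketch of the idea, not production code: an ASP JScript page that keeps the model, the view, and the wiring separate, rather than interleaving markup and logic.

    <%@ Language="JScript" %>
    <%
    // model: compute the data; no markup here
    function getModel() {
      var name = Request.QueryString('name').Count > 0 ?
          String(Request.QueryString('name')) : 'World';
      return { visitor: name, generated: new Date() };
    }

    // view: one function owns all of the markup
    function renderPage(model) {
      Response.Write('<html><body>');
      Response.Write('<h1>' + Server.HTMLEncode('Hello, ' + model.visitor + '!') + '</h1>');
      Response.Write('<p>generated ' + model.generated + '</p>');
      Response.Write('</body></html>');
    }

    // controller: wire model to view
    renderPage(getModel());
    %>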


In any case, NodeJS is by no means the first JavaScript-on-the-server runtime environment.

One thing that makes NodeJS different is asynchrony. Classic ASP did not support asynchrony very well.
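
A trivial sketch of what that asynchrony looks like in Node; the file paths are placeholders. Both reads are dispatched at once and neither blocks the event loop, which is something the Classic ASP model never really offered:

    var fs = require('fs');

    // both reads are started immediately; neither blocks the single thread
    fs.readFile('/tmp/a.txt', function (e, dataA) {
      if (!e) console.log('a.txt done:', dataA.length, 'bytes');
    });
    fs.readFile('/tmp/b.txt', function (e, dataB) {
      if (!e) console.log('b.txt done:', dataB.length, 'bytes');
    });

    console.log('reads dispatched'); // prints before either callback fires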

On the other hand, NodeJS claims to be “super fast” or whatever. And from my point of view, Classic ASP running JavaScript was pretty fast dating back to 2002. Even in Windows 2000 Server, the Script Host compiled the JavaScript and cached the compilation unit, so the first time you ran a page you’d incur the compile cost, but every subsequent request executed the pre-compiled cached result, which meant much better performance. And starting with Windows Server 2003, you could use the kernel-level cache in Windows Server, and lots of other nifty features. ASP really performed quite well. The performance of ASPNET on similar workloads is measurably better, but not by orders of magnitude. I would bet that Classic ASP with JavaScript gives NodeJS a run for its money on performance, even today.

As far as I know, Classic ASP has not been enhanced to use the new Chakra JavaScript engine. Chakra, the v8 competitor, was built for IE9, and it is significantly faster than the older Microsoft Script Host, with about twice the throughput in the workloads I tested. I suppose that Microsoft did not see a customer benefit in using Chakra under Classic ASP. Classic ASP people aren’t clamoring for more performance out of their Classic ASP apps; at this point, they just want stability.

Another thing that sets NodeJS apart now is the active community and the sharing. npm is a beautiful thing, and while there were many, many ASP Classic developers back in the day, there was no such thing as npm; they resorted to newsgroups and forums to share code. Primitive approaches compared to what we do now, for sure.

You are not an idiot if you use NodeJS

Just saw this: a YouTube rant by Brandon Wirtz, from January 2012, clarifying for us all how stupid NodeJS is.

Most excellent. Well played. Even the title is deliciously Colbert-esque: “Node.JS Is Stupid And If You Use It So Are You”. Highly Recommended if you like nerd rants, and I mean that in the best way.

Even so, I disagree. Mostly I don’t disagree with the specific points Mr Wirtz makes, but I do disagree that those points are important, and I also disagree with his conclusion. Be assured: You are not an idiot if you use NodeJS.

Mr Wirtz’s main point is that NodeJS is based on JavaScript, which historically has been slow. Very slow when compared to C/C++, pretty slow when compared to a modern Java or C#. JavaScript running in v8, which is what you get with NodeJS, is Fast fast FAST compared to JavaScript from 1997 running in an interpreter. But even so, it has still been slow compared to other options like C and Java.

So, why is this not a disqualifier?

First, recent v8 micro-benchmarks (example 1, example 2) show that JavaScript can match the performance of C or Java, or even beat the performance of Java. I don’t put much stock in micro-benchmarks, because they are too simplistic and the performance they measure is often not indicative of real-world scenarios. But even so, these benchmarks seem to show that the performance of JavaScript running on v8 is at least comparable. JS may not be faster than Java, but at least it is not orders of magnitude slower.

But more importantly, the performance of JavaScript is not a disqualifier for NodeJS because writing JavaScript is fun. People like it. People like coding in JavaScript, obviously, and many people are attracted to the idea of one-language-everywhere. Not in the Write-Once-Run-Anywhere (WORA) sense of Java circa 1997, but in the learn-once-write-for-anywhere sense. Just about everyone learns JavaScript, though many of us learn it poorly. Douglas Crockford has said in his “Good Parts” lectures that people who know curly-brace languages like Java and C often begin to use JavaScript without actually learning it; because it’s forgiving, they can limp through. In general, people don’t learn JavaScript very rigorously. As an example, not very long ago a respected colleague observed that JavaScript is “Not OO”, and many people have told me that JavaScript, the language, is never JIT compiled.

The obvious comparison is Java, which is a big pain to write. Building something in Java often requires very verbose code, lots of boilerplate, class structure and explicit interface implementation, and so on. The polar opposite of JavaScript.

To my mind, the popularity of NodeJS is similar to the popularity of PHP. People like both of these flawed environments because they mostly work and they are easy to use. NodeJS may be slow, or it may not be. But that is sort of irrelevant. Slow compared to what? It is theoretically slow, but practically speaking, it’s plenty fast for most applications. It’s faster than PHP or Ruby. Also, we have an embarrassment of Gigahertz riches and if we choose to frivolously spend CPU cycles on JavaScript engines, so what?

Also, unlike Mr Wirtz, I don’t care about whether the marketing around NodeJS is accurate or not. They say it’s really fast, and maybe that’s not generally true. But I don’t worry about that. The bottom line is: people like NodeJS because it’s nice and easy and friendly and familiar, and fun damnit, and the performance is plenty good enough.

That doesn’t mean that JavaScript or NodeJS is the answer to all programming questions. It isn’t. For large projects I’d be concerned about type integrity (though we can try using Microsoft’s TypeScript for that), or the hierarchy of modules; I think npm kind of just punted on that. Also, the explosion of modules in npm makes it clear that NodeJS is a technology that is not quite stable. Lots of things are changing, very rapidly. There’s good and bad there, but surely the relative safety and stability of Java v6 offers a stark comparison. So NodeJS is not the answer to every problem, but it is a good answer to many programming problems. You’re not stupid if you use NodeJS.

If I were advising a young aspiring programmer today, I’d tell them to learn JavaScript and Go.

Pet peeve – NodeJS people, what’s with “err”?

This is a pet peeve of mine.

I think the code is the documentation, and names of variables, functions, and classes used in the code are important. These names communicate the purpose of the things being named.

It makes sense that Node’s package manager is called “npm” because that denotes “node package manager.” Easy to decipher, and if you know what a package manager is, then you know what npm is. No need to think about it.

The names of variables in a program implemented in any particular language are also important. i and j indicate loop indices. Variables like req and res might indicate a request and a response object, respectively, though in many cases I would prefer to just see request and response.

Ok

  var http = require('http'),
      server = http.createServer();

  function handleRequest(req, res) {
    res.writeHead(200, { 'content-type': 'text/plain'});
    res.write('Hello, World!');
    res.end();
  }

  server.on('request', handleRequest);

  server.listen(8080);

Better

  var http = require('http'),
      server = http.createServer();

  function handleRequest(request, response) {
    response.writeHead(200, { 'content-type': 'text/plain'});
    response.write('Hello, World!');
    response.end();
  }

  server.on('request', handleRequest);

  server.listen(8080);

Or, if you look at Java, an exception is usually named with an e or an exc, sometimes followed by a numeric suffix. Like so:

    try {
        arf me = new arf(args);
        me.Run();
    }
    catch (java.lang.Exception exc1) {
        System.out.println("Exception:" + exc1.toString());
        exc1.printStackTrace();
    }

It makes sense.

To err is human.

Which brings me to the point. There are numerous examples showing how to code a callback function in Node, that look like this:

  var fs = require('fs');

  fs.readFile('/etc/passwd', function (err, results) {
    if (err) {
      handleError(err);
      return;
    }
    console.log('File contents:', results);
  });

As you can see, the variable that holds the error is called err. Now if English is one of the languages you use, which is probably true if you are reading this, then you probably know that “error” is a noun and “err” is a verb.

Using the verb form of that word as a “shorthand” for the noun is confusing to code reviewers, and therefore wrong. Yes, everyone is assumed to know that “err” really refers to an error, but everyone incurs a small but not negligible mental burden in reconciling that difference, every time they look at that variable. Why? To save two characters? If we really are economizing on variable name length, and I respect efforts to do so, then why not just use the letter e, which is often used to represent errors or exceptions?

Either e or error is preferred over err.

An error does not become truth by reason of multiplied propagation, nor does truth become error because nobody sees it. – Mahatma Gandhi

Pretty printing XML from within emacs

I use emacs. Can’t help it. Been using it for years, and the cost of switching to something “more modern” has never reached the payoff threshold.

Today I want to show you how I pretty-print XML from within emacs.

The elisp for the pretty-printing logic was originally from
a stackoverflow answer; I modified it slightly.
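
The core of it is essentially the well-known function from that answer; it leans on nxml-mode to do the indenting:

    (defun pretty-print-xml-region (begin end)
      "Pretty-print XML markup in the region.
    Inserts a linebreak between adjacent tags, then indents the
    result using nxml-mode's indentation rules."
      (interactive "r")
      (save-excursion
        (nxml-mode)
        (goto-char begin)
        ;; separate adjacent tags with newlines
        (while (search-forward-regexp "\>[ \\t]*\<" nil t)
          (backward-char) (insert "\n"))
        ;; let nxml-mode do the indenting
        (indent-region begin end))
      (message "Ah, much better!"))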


Thanks to isagalaev for highlight.js.

emacs dired fixups

For the geeks and dinosaurs among us**, a quick post on dired fixups I use for emacs. I like to sort by name, size, date, and extension. By default, dired sorts by name or date, I think; for other orderings you need to manually enter the ls sorting options. The elisp below teaches dired to sort by size and extension as well.
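
Here is a minimal sketch of the approach; the function names are mine, the mechanism is dired-sort-other, and the -S and -X switches assume GNU ls:

    (require 'dired)

    (defun dired-sort-by-size ()
      "Re-sort the dired buffer by file size, largest first."
      (interactive)
      ;; -S is the GNU ls switch for sorting by size
      (dired-sort-other (concat dired-listing-switches " -S")))

    (defun dired-sort-by-extension ()
      "Re-sort the dired buffer by file extension."
      (interactive)
      ;; -X is the GNU ls switch for sorting by extension
      (dired-sort-other (concat dired-listing-switches " -X")))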




Thanks to isagalaev and jamesward for highlight.js and github-files.js respectively.

**Someone today told me that emacs is a throwback and I am a dinosaur for continuing to use it. He said he used emacs in the past, but now he uses “more modern” tools. I’m a little suspicious because his first excuse for not using emacs was, “I run Windows”.

I hate Node.js

Node.js is cool, so they say. Everybody’s doing it, so I hear.

I hate it. Node.js has wasted so much of my time, I don’t think I will ever forgive it. But take that with a grain of salt. I don’t use Node.js for its intended purpose, which is server-side JavaScript.

What I want is a scripting environment for automation on the local machine. In this particular case, I want to script an FTP session from my bash prompt; I want to script a directory sync. I know I can do this with bash scripting, but I already know JavaScript syntax and semantics, so I’d like to use what I know. It’s FTP in this case, but in general I want to be able to script arbitrary things on the local machine.

By the way: I can do this on Windows. I can run JavaScript programs from the CMD shell, to script an FTP session, automate filesystem operations, launch applications, and a bunch of other things. It’s really nice. There’s a whole catalog of COM objects that can be scripted, including obscure stuff like the fax system in Windows, or more mainstream things like IE settings (proxies, for example), or IIS administration. Tons of things.
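
For instance, a little JScript like this, run with cscript from the CMD prompt, walks a directory synchronously; the folder path is just an example:

    // dirlist.js -- run with: cscript //nologo dirlist.js
    var fso = new ActiveXObject('Scripting.FileSystemObject');
    var folder = fso.GetFolder('C:\\temp');

    // enumerate the files in the folder -- no callbacks required
    for (var e = new Enumerator(folder.Files); !e.atEnd(); e.moveNext()) {
      var f = e.item();
      WScript.Echo(f.Name + '\t' + f.Size + '\t' + f.DateLastModified);
    }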

Of course I could do similar things with Node.js on MacOS, too. The problem is, the Node.js model is designed for server use. Everything is asynchronous. When I retrieve a set of files from an FTP server, and I want to iterate and retrieve each file, I have to do that asynchronously. But I don’t want it to be asynchronous. Writing asynchronous code lets me get really good throughput on a server. Writing asynchronous code for my purpose just obscures the goal of the code. I want to do this:

    fileList.forEach(function(item) {
        var modTime = ftp.getModTime(item);
        if (modTime > lastUpdate) {
            ftp.retrieveFile(item);
        }
    });

But I can’t do that. No sir. No I cannot. I’m using Node.js. And because of that, I need this:


    var c = 0, L = fileList.length, 
        checkIfDone = function() {
            c++;
            if (c == L) { next(); }
        };

    fileList.forEach(function(item, ix) {
        var localPath = (dir == '.') ? item : Path.join(dir, item),
            stat, localMtime = ...,
            remoteMtime = 0;

        // get modification time of the remote file
        sync.ftp.raw.mdtm(localPath, function (e, res) {
            var d, tm;
            if (e) { }
            else {
                remoteMtime = new Date(res);
                if (localMtime == 0 || (remoteMtime > localMtime)) {
                    // Retrieve the file using async streams
                    sync.ftp.getGetSocket(localPath, function(e, sock) {
                        if (e) return console.error(e);
                        var fd = fs.openSync(localPath, "w+");

                        // `sock` is a stream. attach events to it.
                        sock.on("data", function(p) {
                            fs.writeSync(fd, p, 0, p.length, null);
                        });
                        sock.on("close", function(e) {
                            if (e) return console.error(new Error("error"));
                            fs.closeSync(fd);
                            checkIfDone();
                        });

                        // The sock stream is paused. Call
                        // resume() on it to start reading.
                        sock.resume();
                    });
                }
                else {
                    checkIfDone();
                }
            }
        });
    });

Can anyone ELSE see why I’d rather not use an asynchronous-optimized, server-oriented programming environment to script my desktop?

It’s not JavaScript that’s the problem here. I can grab the v8 JavaScript engine and use it to run JS code. That works. The problem is that there are no JS libraries for v8. There’s no FTP library. There’s no “any” library. The only FTP libraries I’ve found are Node-compliant; to use them, I have to do Node.js things. Likewise for all the other purposes. There’s no package manager for JS libraries, except for Node.js.

I think the solution here is for me to learn bash scripting and use curl. But that’s a lame solution. It would be nice if MacOS supported JavaScript for local scripting, as nicely as Windows does. Because I hate Node.js.

The way Azure should have done it – A better Synonyms Service

This is a follow-up to my previous post, in which I critiqued the simple Synonyms Service available on the Azure Datamarket.

To repeat, the existing URI structure for the service is like this:

GET https://api.datamarket.azure.com/Bing/Synonyms/GetSynonyms?Query=%27idiotic%27

How would I do things differently?

The hostname is just fine – there’s nothing wrong with that. So let’s focus on the URI path and the other parts.

GET /Bing/Synonyms/GetSynonyms?Query=%27idiotic%27

Here’s what I would do differently.

  1. Simplify. The URI structure should be simpler. Eliminate Bing and GetSynonyms from the URI path, as they are completely extraneous. Simplify the query parameter. Eliminate the url-encoded quotes when they are not necessary. Result: GET /Synonyms?w=improved
  2. Add some allowance for versioning. GET /v1/Synonyms?w=positive
  3. Allow the caller to specify the API Key in the URI. (Eliminate the distorted use of HTTP Basic Auth to pass this information). GET /v1/Synonyms?w=easy&key=0011EEBB4477 (A sketch of this design follows the list.)
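
Here is a rough sketch of that design on Express; lookup() is a hypothetical function returning an array of synonyms, and the throttling policy is only suggested in a comment:

    var express = require('express');
    var app = express();

    app.get('/v1/Synonyms', function (request, response) {
      var word = request.query.w;
      if (!word) {
        response.statusCode = 400;
        return response.json({ error: 'missing required query param: w' });
      }
      // key is optional; keyless callers get throttled, not refused
      var key = request.query.key || null;
      response.json({ word: word, synonyms: lookup(word) });
    });

    app.listen(8080);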

What this gets you, as an API provider:

  1. This approach allows users to try the API from a browser or console without registering. The service could allow 3 requests per minute, or up to 30 requests per day, for keyless access. Allowing low-cost or no-cost exploration is critical for adoption.
  2. The query is as simple as necessary and no simpler. There is no extraneous Bing or GetSynonyms or anything else. It’s very clear from the URI structure what is being requested. It’s “real” REST.

What about multi-word queries? Easy: just URL-encode the space.
GET /v1/Synonyms?w=Jennifer%20Lopez&key=0011EEBB4477

There’s no need to add in url-encoded quotes for every query, in order to satisfy the 20% case where the query involves more than one word. In fact I don’t think multi-word would even be 20%. Maybe more like 5%.

For extra credit, do basic content negotiation: look at the incoming Accept header and modify the format of the result based on that header. As an alternative, you could include a suffix in the URI path to indicate the desired output data format, as Twitter and the other big guys do:

GET /v1/Synonyms.xml?w=adaptive&key=0011EEBB4477

GET /v1/Synonyms.json?w=adaptive&key=0011EEBB4477
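
The Accept-header flavor of that extra credit might look like this sketch, continuing the hypothetical Express example from the earlier list; toXml() is an assumed helper:

    // choose the response format from the incoming Accept header
    app.get('/v1/Synonyms', function (request, response) {
      var synonyms = lookup(request.query.w);
      response.format({
        'application/json': function () {
          response.json({ synonyms: synonyms });
        },
        'application/xml': function () {
          response.type('application/xml').send(toXml(synonyms));
        }
      });
    });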

As an API provider, conforming to a “pragmatic REST” approach means you will deliver an API that is immediately familiar to developers regardless of the platform they use to submit requests. That means you have a better chance to establish a relationship with those developers, and a better chance to deepen that relationship.

That’s why it’s so important to get the basic things right.

Azure Synonyms Service – How NOT to do REST.

Recently, I looked at the Azure data marketplace (or whatever it’s called) to see what sort of data services are available there. I didn’t find anything super compelling. There were a few premium, for-fee services that sounded potentially interesting, but nothing that I felt like spending money on before I could try things out.

As I was perusing, I found a synonyms service. Nice, but this is nothing earth-shaking. There are already a number of viable, programmable synonyms services out there. Surely Thesaurus.com has one. I think Wolfram Alpha has one. Wordnik has one. BigHugeLabs has one that I integrated with emacs. But let’s look a little closer.

Let me show you the URL structure for the “Synonyms” service available (as “Community Technical Preview”!) on Azure.


https://api.datamarket.azure.com/Bing/Synonyms/GetSynonyms?Query=%27idiotic%27

Oh, Azure Synonyms API, how do I NOT love thee? Let me count the ways…

  1. There’s no version number. What if the API gets revised? Rookie mistake.
  2. GetSynonyms? Why put a verb in the URI path, when the HTTP verb “GET” is already implied by the request? Useless redundancy. If I call GET on a URI path with the word “Synonyms” in it, then surely I am trying to get synonyms, no?
  3. Why is the word Bing in there at all?
  4. Notice that the word to get synonyms of, must be passed with the query param named “Query”. Why use Query? Why not “word” or “term” or something that vaguely corresponds to the actual thing we’re trying to do here? Why pass it as a query param at all? Why not simply as part of the URL path?
  5. Also notice that the word must be enclosed in quotes, which themselves must be URL-encoded. That seems like an awkward design.
  6. What you cannot see in that URL is the required authentication. Azure says the authentication is “HTTP Basic Auth,” which means you pass a username-and-password pair, joined by a colon and then base64 encoded, as an HTTP header. But… there is no username and password. Bing/Azure/Microsoft gives you an API Key, not a user name. And there’s no password. So you use the API key twice, as both username and password, base64 encode *that*, and pretend that it’s HTTP Basic Auth. (A sketch of this dance follows the list.)
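
A sketch of that dance from Node; the key value is a placeholder, and the key-as-username-and-password construction follows the DataMarket samples:

    // the API key serves as both username and password in the
    // fake Basic Auth scheme
    var https = require('https');
    var key = '0011EEBB4477';   // placeholder API key
    var auth = 'Basic ' + new Buffer(key + ':' + key).toString('base64');

    https.get({
      host: 'api.datamarket.azure.com',
      path: '/Bing/Synonyms/GetSynonyms?Query=%27idiotic%27',
      headers: { Authorization: auth }
    }, function (response) {
      var body = '';
      response.on('data', function (chunk) { body += chunk; });
      response.on('end', function () { console.log(body); });
    });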

If readers aren’t persuaded that the above are evidence of poor API design, then you might consider heading over to the API Craft discussion group on Google Groups to talk it over.

Alternatively, or in addition, spend some time reading “the REST manifesto”: Roy Fielding’s PhD dissertation, specifically chapter 5 in that document. It’s about 18 printed pages, so not too big a commitment.

The problem with releasing a poorly-designed API, is that it can do long-term damage.
As soon as a real developer takes a look at your service, he will not walk, he’ll RUN away to an alternative service. If your API is a pain to use, or is poorly designed, you are guaranteed to drive developers somewhere else. And they won’t come back! They might come just to poke around, but if they see a bad service, like this Synonyms service, they will flee, never to return. They will quickly conclude that you just don’t get it, and who could blame them?

So learn from Azure’s mistakes, and learn from the success of others. Take the time to get it right.

And now a word from my sponsor: Apigee offers a Rapid API Workshop service where we can send in experts to collaborate with your team on API design principles and practice. Contact us at sales@Apigee.com for more information.

US balks at ITU Treaty; Government vs Big (Internet) Business

At the WCIT in Dubai, the US has refused to sign the ITU Treaty establishing global regulations for dealing with the Internet.

The ITU is a UN agency, part of a global government or quasi-government effort. This isn’t conspiracy theory; this is just fact. The UN attempts to establish rules governing global behavior.

A number of governments, including China, Russia, Iran, Saudi Arabia, many African nations, and other “closed” societies have voted in favor of the ITU Treaty, which includes provisions for how the Internet can be regulated. In short, these are not lovers of liberty. The concerns about their interests in restricting the public Internet are not new.

The US has voted against the treaty. Certainly the commercial interests in the US, including advertising-focused companies like Google, and content providers like Disney, are against regulation of the global network, since it restricts their access to vast new oceans of potential customers. I can understand the reluctance of some countries to open their people up to the “assault” of US-oriented commercial interests via the public network. I can understand it, but I don’t condone it.

I am flatly against the mullahs in Iran approving what’s ok for me to read on the Internet in Seattle, Washington. I also think that, without democratic elections, these people and their counterparts in Saudi, Russia, China, and many other countries, have no moral authority to make similar decisions for the people that live in their respective countries.

Let the closed societies try to regulate the Internet within their borders if they want to. It won’t work. As we see in Iran, China, and other places, the information will get in and out, eventually. People want the freedom to educate themselves. The US should not back up one bit from its stance on this.

The Internet is a much more effective spreader of democracy than 3 battalions of US Marines.

Related: Dvorak on the ITU Treaty