I'm often accused of hating Javascript. This is not entirely true: I have no real objection to the language itself (it's nothing special, just another run-of-the-mill scripting language, albeit with some good XML support) -- I just hate how Javascript is used.
The problem fundamentally stems from the fact that Javascript is code. It is a program, supplied by a website, that is interpreted and run by the browser on my system. Automatically. Without approval or say-so.
The justification for this approach is that this extends the capabilities of the browser in new and innovative ways. And, indeed, there is a grain of truth in that assertion, and the argument is not entirely without merit.
However, much of what Javascript is used for is not to extend the capabilities of the browser -- it is to re-implement existing features, or to disable existing features, or to offload processing from the server.
One of the most common (ab)uses of Javascript is to replace the URL in an anchor's href with a bit of code that directs the browser to go to a ... (you guessed it) URL. This is one of the most annoying and useless uses of Javascript, because "normal" browser behavior does exactly that already.
Of course, this breaks the "copy link" feature of many browsers, as what you get in the cut buffer is a bit of Javascript code with an embedded URL.
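To make the breakage concrete, here's a sketch of the kind of recovery hack this forces on users. The function name and the regex are my own invention, not any particular browser's or extension's API: it just digs the real URL back out of a `javascript:` href so that "copy link" could yield something useful again.

```javascript
// Hypothetical helper: recover the embedded URL from a "javascript:" href.
// A plain anchor would never need this -- the href WOULD be the URL.
function extractUrl(href) {
  // Match the first quoted http(s) URL inside the javascript: code.
  const m = href.match(/['"](https?:\/\/[^'"]+)['"]/);
  return m ? m[1] : null;
}
```

The very existence of such a workaround is the point: the page took a URL, wrapped it in code, and now someone has to unwrap it again.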
Sometimes the Javascript code fragment doesn't exactly duplicate the behavior of an anchor with an href, but instead attempts to open the link in a new window. This is worse than useless, as it breaks the principle of least surprise (if you're not surprised, you ought to be): the user should expect that clicking on a link will cause the current "page" to change to the target page.
Should they want to open up a new browser window, they have a way already; every browser that can open up a new window in response to the Javascript direction to do so can ALSO open up a link in a new window by a user's action.
To regain the expected behavior, the user is required to cut-and-paste the URL from the new window back into their original browser window. This is a significant annoyance at best, and a drain on system resources and user patience otherwise.
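The contrast can be sketched in a few lines. This is a made-up click handler, not anyone's real code: the polite version leaves the decision to the browser and the user's own actions (middle click, ctrl/cmd-click), while the rude version would force `window.open` on every click regardless.

```javascript
// Hypothetical sketch: decide what a click handler SHOULD do.
// The rude alternative -- unconditionally calling window.open(url) --
// is exactly the anti-pattern described above.
function politeNavigate(event, url) {
  // If the user asked for a new window/tab themselves (middle click,
  // ctrl/cmd-click), step aside and let the browser's default behavior run.
  if (event.ctrlKey || event.metaKey || event.button === 1) {
    return "default";
  }
  // A normal click just navigates, as the user expects.
  return "navigate:" + url;
}
```

The user already has every mechanism needed to open a new window; the script's only job, if it must exist at all, is to stay out of the way.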
Fundamentally, this sort of use of Javascript reflects an arrogance on the part of the web-developer/"programmer", and a disdain for the user. The user is too dumb to know that they really want to open a new window, and if they don't, well, the "programmer" knows best how the user should be using the web-page. The arrogance and disdain may be unconscious, but I assert that it's there: the "programmer"'s preferences trump the user's.
This arrogant attitude is made even MORE obvious when the web-developer decides that certain features of the browser need to be disabled.
Why in the world do users put up with such a thing?
The most annoying form of "disabling" is the trick that briefly shows you the page (yes, you can see the data without Javascript) and then redirects the page to a "you must use javascript" notice.
(And, indeed, users may not put up with it; I have not heard anyone complain about this in a long time, so perhaps it is becoming less of an issue these days.)
Another use for Javascript is to offload some of the responsibility from the server to the client. This isn't so much arrogance, I think, as simple laziness.
A developer may choose to push some of the computation from the server to the client. The computation is for the client's benefit, and the server is presumably busy, so reducing the load on the server and increasing the load on the client can make sense. Of course, since the computation is being done with a scripting language, the actual amount of work increases, but that's a cost paid for by the client.
Given the habit of pushing computation to the client, it then becomes very easy to push verification and validation tasks to the client, using the same justification. And it's here that a developer's laziness transmutes into stupidity.
Verification and validation of user data should always, without exception, be done on the server. Even if the Javascript verifies the user's input and validates that all values are in the correct range, the server still MUST check those values again. Failure to do so constitutes a potential breach of security, and at least an integrity failure.
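A minimal sketch of what the server-side half must look like, in server-side Javascript for the sake of the example (the function name and the 1-100 range are made up): regardless of what the page's script already checked, the server treats the incoming value as untrusted text and re-validates it before use.

```javascript
// Hypothetical server-side check. The client's Javascript may have run
// the same test already -- the server re-checks anyway, because the
// client's answer cannot be trusted.
function validateQuantity(raw) {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 100) {
    throw new RangeError("quantity out of range");
  }
  return n;
}
```

Anyone can bypass the page entirely and submit the form data directly, so this check is the only one that actually counts.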
The very justification for pushing this logic to the client side means that there's pressure to remove it from the server side. If performance is a little slow, that's an easy thing to remove to lighten the load. If not by the original developer, then by a junior programmer who's been tasked with speeding up the production server.
Additionally, the check will be done in two places, which might well lead to the checks getting out of sync. (I've seen this happen, with overlapping ranges. The test cases passed, but some input fields were limited to exactly one value.)
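The overlapping-ranges failure is easy to reproduce in miniature. The ranges below are invented for illustration, not taken from the actual incident: the client-side check and the server-side check have drifted apart, and the only inputs that survive both sit in their accidental intersection.

```javascript
// Hypothetical illustration of two checks drifting out of sync.
const clientOk = (n) => n >= 1 && n <= 10;   // what the page's script allows
const serverOk = (n) => n >= 10 && n <= 20;  // what the server allows

// Which values can a user actually submit successfully?
const accepted = [];
for (let n = 1; n <= 20; n++) {
  if (clientOk(n) && serverOk(n)) accepted.push(n);
}
// accepted is [10] -- the field is effectively limited to exactly one value
```

Each check looks plausible on its own, and a test case that happens to use a value in the overlap passes, which is exactly why this kind of bug survives.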
Once client-side verification and validation have become established procedures, it's then only a small step to client-side authentication. If the security folks go nuts at the idea of client-side validation, they just give up at this stage, and stand back and laugh.
"That would never happen," you say, "even an idiot who doesn't understand why client-side validation is bad would recogonize that client-side authentication is stupid."
You'd be wrong.
Most recently, I've seen a widespread "enterprise-class" web-based timesheet system, used by dozens of DoD contractors, implement a client-side authentication scheme. Gaining admin access to the timesheet system is done with Javascript -- on the client.
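To be clear about what that means, here is a caricature of the mistake -- emphatically NOT the actual product's code, just the shape of it: the secret and the check both live in the page, where any user can read them with "view source", or simply skip the check altogether.

```javascript
// Caricature of client-side authentication (hypothetical credential).
// Everything here ships to the user's browser in plain view.
function isAdmin(password) {
  return password === "letmein42"; // visible to anyone who views source
}
```

Since the client controls the code, the client also controls the verdict; an "authentication" step the attacker can edit is no authentication at all.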
So what's the real problem?
Basically
How to get where we should be from where we are pretty much falls out of the problem list. Solve the problems in an intelligent way, and we're golden, right?
When we find a situation where doing something cleanly and easily is difficult or impossible with plain old HTML, we may need to hack up some Javascript to accomplish what we want. If it's really that useful an improvement, we should update the HTML standard so that we no longer need the workaround.
We need to be able to encapsulate and share changes to Javascript code. Most folks lack the time, inclination, or skill to sanitize code; it becomes a tedious exercise, repeated by every user. We need a trusted third party to inspect, repair, and package a Javascript codebase so that other users can use the blessed code.
This plays well into the corporate situation -- install browsers with no (unmediated) Javascript allowed, let the security administrators inspect the Javascript for those sites that require it, and then package up the "secured" Javascript for installation on the users' browsers.
Better proxies that can rewrite or remove unblessed Javascript would help with this a lot, and would solve the problem faster. Instead of forcing each browser to change, we can make it a moot point, at least in the corporate environment.
One of the really annoying sites is ShareBuilder. They demonstrated utter cluelessness by taking a working site and making sure it doesn't work unless you have Javascript enabled -- and they considered this some sort of improvement. Note: I was informed that this is no longer true, but my own tests show that Javascript is still required.
Home Depot solicits opinions of their customers via www.homedepotopinion.com -- which requires Javascript to be enabled to take the survey, and which further requires Javascript to be enabled to provide feedback.
Also, check out The Daily WTF -- specifically this article -- for an example of lame Javascript implementations. Note that all the validation is on the client-side.
The original version of the essay was much shorter and more a result of frustration. As you can see, I've moderated my position somewhat over time, mostly from discussion in the #kernel-panic IRC channel on freenode.
Find me there to help me refine my position, add examples, or otherwise improve this essay.