...or why Java infects Unix with the Windows mindset.
Recently Paul Murphy, the king of the Sun zealots, blogged about Java bringing the Windows mentality to Unix, all the while slamming Java. In response, John Carroll, a Microsoft employee, rose to the defense of Sun's self-declared crown jewel. Talk about weird.
The funny thing is they are both right, although Murph's arguments are pretty weak.
A little history
Unix and Windows evolved with very different definitions of what the primary unit of isolation should be. On Windows, it is (or was) the node. Each Windows user (and each DOS user before him) occupied exactly one node. The worst that could happen was that the user destroyed his own workspace, so interactive performance reigned supreme over system integrity. You have a node. You have a user. The node does what the user wants as fast as it can. Initially that meant running a single application at a time, then allowing several to be open at once with the one in the foreground receiving most of the resources, and finally allowing several applications to run simultaneously. Multithreading won out because it carried lower overhead and kept that foreground process responsive. Threads were optimized, while processes were neglected.
Unix evolved to be fundamentally multiuser, and its primary unit of isolation is the process. Unix systems were intended to be shared, so it was important that one user could not dominate another. Furthermore, a slew of processes (daemons) all ran under the same account while providing services to multiple users, so in order for users to share the system, processes had to share it too. Unlike on Windows, one process crashing the entire system was not acceptable, because that would destroy multiple users' data. As a result, processes were designed to provide a strong level of isolation and were heavily optimized to make sure people used them. Threads were largely ignored, or simply treated as processes with a shared heap space, because several cheap processes could be chained together to accomplish the same thing more simply.
The Unix Way
I want you to consider good old-fashioned CGI programs for a moment. Imagine one written in C. First, you may think "Oh my God, running a web application in a non-managed environment. The resource leaks! The memory leaks! The memory consumption of all those processes! Oh, the horror!" Of course, you would be wrong. Repeatedly launching and terminating a Unix process is dirt cheap, especially for a simple program written in C. The OS caches an image of the executable in memory, which can be shared among invocations. The individual process can leak all the resources it wants, because as soon as it terminates all of those resources are automatically freed by the OS, no matter how incompetent the programmer. If the process fails to terminate, your friendly neighborhood sysadmin can kill it without hurting any other process.
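To make that concrete, here is a minimal sketch of such a CGI program. It assumes the classic CGI convention of the server passing request data through the QUERY_STRING environment variable, and the deliberate leak is the point: the kernel cleans it up at exit.

```c
/* A minimal CGI program of the kind described above: the web server
 * fork()s and exec()s this binary once per request, the program writes
 * a response to stdout, and then it exits.  Any memory it "leaks" is
 * reclaimed wholesale by the kernel when the process terminates. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* The web server passes request data through environment variables. */
    const char *query = getenv("QUERY_STRING");

    /* Deliberately sloppy: strdup() allocates memory that is never freed.
     * It does not matter -- the allocation dies with the process. */
    char *copy = strdup(query ? query : "");

    printf("Content-Type: text/plain\r\n\r\n");
    printf("You asked for: %s\n", copy);

    return 0;   /* process exits; the OS frees everything */
}
```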
This method works for producing super-available applications despite incredibly crappy code. I've seen it, both in the form of CGI and in the form of much more sophisticated applications. It works. Users get upset about lost transactions, but the application as a whole almost never goes down.
Enter Java
Java took cheap Unix processes and made them expensive. To compensate, it provided primitives for multithreading. It provided a garbage collector to at least slow memory leaks. It turned all those transient application processes into one big JVM process, not only serving all the transactions for a given user but serving all the transactions for an entire application, or even multiple applications. Java made it more difficult to make destructive programming errors, but it also made the consequences much more severe. Your friendly neighborhood sysadmin is powerless against a runaway thread or a slow memory leak. All he can do is kill the process, bumping out all of the users and killing all of their sessions.
It's so bad, the process might as well be a node. Unix becomes Windows. The JVM is practically an operating system, but without all of the features of an operating system and a whole lot less mature.
Enter Java Frameworks
This is really what Murph was railing against, although he didn't name it, and he conflated it with the core language by labeling it "Business Java." Frameworks evolved for a myriad of reasons, which are often summarized as "taking care of the plumbing so the developer can focus on the business logic." The "plumbing" is a lot of things, including managing certain resources and generally ensuring the application code executes within a well-defined life cycle where it is unlikely to do damage. In other words, instead of giving the user a simple, uniform mechanism like a process to protect the world from his mistakes, he is given dozens of hooks where he can implement little snippets of focused and hopefully bug-free functionality. All this involves a lot of learning above and beyond "the Java you learned in school" (meaning the core language and libraries), putting a cognitive load on the programmer and additional runtime load on the machine.
Multiprocess versus Multithreaded
Most Unixes have evolved efficient threading, and Windows has come a long way toward becoming a multiprocess, multiuser environment. Consequently, developers need to be able to intelligently decide when to use multiple processes, when to use multiple threads, and when to use a hybrid approach. For example, Apache httpd has for quite a while now used a hybrid approach. On one hand, on most operating systems threads involve less overhead than processes, so using multiple threads is more efficient than using multiple processes. On the other hand, multiple processes ultimately give you better reliability because they can be spawned and killed independently of one another, so making a system that can run for months without stopping doesn't require writing a program that will run for months without stopping.
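As a rough sketch of that hybrid shape (not Apache's actual code), here is what it looks like on a POSIX system: a parent forks a handful of worker processes, and each worker runs a small pool of threads. The WORKER_PROCESSES and THREADS_PER_WORKER counts and the do-nothing thread body are placeholders.

```c
/* Sketch of the hybrid model: a parent forks a few worker processes,
 * each of which runs a small pool of threads.  A crashed worker takes
 * down only its own threads; the parent can fork a replacement without
 * disturbing the others. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define WORKER_PROCESSES   4
#define THREADS_PER_WORKER 8

static void *thread_main(void *arg)
{
    (void)arg;
    /* Placeholder for the real work: accept a connection, serve it, repeat. */
    for (;;)
        sleep(1);
    return NULL;
}

static void worker_process(void)
{
    pthread_t threads[THREADS_PER_WORKER];
    for (int i = 0; i < THREADS_PER_WORKER; i++)
        pthread_create(&threads[i], NULL, thread_main, NULL);
    for (int i = 0; i < THREADS_PER_WORKER; i++)
        pthread_join(threads[i], NULL);
}

int main(void)
{
    for (int i = 0; i < WORKER_PROCESSES; i++) {
        pid_t pid = fork();
        if (pid == 0) {              /* child: become a worker */
            worker_process();
            _exit(0);
        }
    }

    /* Parent: reap workers as they die; a real server would fork a
     * replacement here so the system keeps running indefinitely. */
    for (;;) {
        if (wait(NULL) < 0)
            break;
    }
    return 0;
}
```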
So how do you choose? My rule of thumb is to look at the amount of shared data or messaging required between concurrent execution paths and balance that against how long the "process" (not the OS process) is expected to live. Execution paths that share a lot of data or that are chatty will benefit from the lower overhead of threading, and threading lets you avoid the complexities of shared memory or IPC. Of course, multiprocessing lets you avoid the complexities of threading APIs, and there are libraries that address both, so the complexity issue could be a wash depending on your previous experience.
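The following sketch illustrates that rule of thumb, with a trivial counter standing in for "lots of shared, chatty state" (POSIX assumed): threads get shared state for free behind a mutex, while separate processes have to ship the same information back over a pipe or through shared memory.

```c
/* Two execution paths need to update one counter.  With threads, the
 * shared state is just a variable behind a mutex; with separate
 * processes, the same exchange needs a pipe (or shared memory) and
 * explicit message passing. */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                       /* shared memory for free */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    /* Threaded version: the data is simply shared. */
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("threads: counter = %ld\n", counter);

    /* Process version of the same exchange: the child gets its own copy
     * of everything, so its result has to travel back over a pipe. */
    int fd[2];
    pipe(fd);
    if (fork() == 0) {
        long local = 100000;             /* the child's private work */
        write(fd[1], &local, sizeof local);
        _exit(0);
    }
    long from_child = 0;
    read(fd[0], &from_child, sizeof from_child);
    wait(NULL);
    printf("processes: child reported %ld\n", from_child);
    return 0;
}
```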
So why is Murph so wrong? Is JC right?
I think Murph wants to divide the world along nice clean lines. System programmers program in C. They don't need the hand-holding of managed runtimes or languages that treat them like impudent children. They do need lots of flexibility and lots of rope. Application programmers, on the other hand, need high-level abstractions that are close to the business domain that they are addressing. They need to be able to rapidly build software and rapidly change it as requirements evolve. They don't need lots of flexibility and should stay away from low-level details. So, in Murph's eyes, the problem with Java is that it doesn't do either particularly well. The managed runtime and object-orientation get in the system programmer's way, while the general-purpose nature of the language and mish-mash of libraries and frameworks just confuse application developers, or rather distract them from their true purpose. System programmers need C. Application developers need 4GLs.
The fatal flaw in Murph's reasoning is that it ignores the in-between. What happens when the systems programmer or 4GL creator fails to provide the right abstraction for the application developer? He's stuck, that's what happens. Software development is as much about creating abstractions as using them. Consequently, application developers need general-purpose languages.