Tuesday, July 17, 2007

More Efficient User Interfaces

Over the weekend I ran into the following video about changing user interface paradigms. I found the language-based interface really interesting, but didn't much care for the ZUI. Unfortunately, the language-based interface reminds me of old games from about the time personal computers became powerful enough to run Eliza-like software (for hackers, go here for code, and here for an easy environment on Windows). Basically you typed in "natural language" commands, and the characters were supposed to carry them out. Personally, I thought it was about the most awful way to play a game. Maybe technology has progressed in the past couple of decades. It looks like it has potential.

Anyway, the first thing that struck me was that it reminded me of SciFi books where they describe any advanced computer use as "programming," possibly because I largely stick to classic SciFi. Of course the vast majority of computer users, who indeed perform relatively complex tasks, are not programmers. Not by a long shot. But I think the language-based interface concept could bring "programming" one step closer to the masses (whether that is good or bad is an exercise for the reader).

The second thing I noticed was a striking similarity to Smalltalk, only Smalltalk is a lot better. You see, if all of your software were inside of a Smalltalk image, then you could easily interact with it programmatically. Furthermore, Smalltalk's syntax and dynamic nature make it fairly approachable for those who have not been corrupted by years of experience with less lofty languages (in other words, me). Given a "humanized," if you will, object model I could see some real potential. At a minimum it would be a realization of much more componentized software. Alas, we can dream...

The paradigm for such a system would be something like this: someone builds an API that translates human-language-like typed commands into Smalltalk code. It could quite possibly be an internal DSL. Then you build the interface so that an easy keystroke lets you "talk" to the computer by typing. AI would be optional, but I think essential for really advanced use by lay users. As time progresses, the user and the computer would learn more and more complex tasks - the user would essentially be programming the computer, but it would feel more like training. (There's a rough sketch of the idea at the end of this post.) Ultimately this could facilitate much more efficient and sophisticated computer use. Maybe.

There are counterpoints. Paul Murphy seems to think that everyone should learn keyboard shortcuts and Unix command-line utilities. While I agree on the utility of both, somehow I don't think the everyday computer user is ever going to do it. The more convincing counterpoint was made by my wife, an attorney, who wasn't even really trying to make a counterpoint. The exchange went something like this:

Me: I saw this video the other day about changing user interface paradigms to consist of unified functionality instead of distinct applications.
Her: Why would you want that?
Me: Well, different applications do different things, and so often you have to use more than one to get something done.
Her: Word is complicated enough already. Wouldn't that make it more complicated?
Me: Yeah, that's why this guy advocated switching to text-based commands instead of menus and dialogs and stuff.
Her: Huh?
Me: You know how Word has a ton of menus, and those have sub-menus, and those launch dialog boxes with a bunch of tabs in them?
Her: Yeah. It's confusing.
Me: Well, if there were easy text commands that resembled natural language, then you could just tell the computer what to do and not worry about the menus.
Her: Wouldn't that mean I have to learn all that? What's wrong with what we've got? It works. I know how to use it. Why change? People won't like the change. It will confuse them.
Me: Well, how about intelligent search....

And so it goes, on to another defeat for the engineer. But that's the point. We dream of novel new things. We dream of users who care to really learn applications. We get people who want consistency. They've learned it, and they don't want to learn it again.

So why is this important? Well, two reasons. One is that people are amazingly hampered by most applications. By people, I mean everyone. How many of you have spent hours fighting Word or PowerPoint to get some document to look right? I know I've done it plenty of times. That's wasted time, and time is money.

But there's a more important reason. I think we've hit a plateau in terms of technology innovation. We have all this shiny new processing power and networks, and most "innovation" seems to consist of things like "social networking" (many were doing that at 1200bps or less years ago), sharing pictures (ditto), playing media (replacing the TV/VCR isn't innovation), and other such pursuits that, while they may be very beneficial, hardly represent technology advancements. Why? There are a lot of reasons. I think one of them is that computers are so complicated already that the odds of getting a user to accept them doing something even more complicated are incredibly low. You can do it in specialized fields that have a technical leaning (science, engineering, finance, economics), but doing it for others is another ballgame. In the long run, in order to keep innovating, we have to be able to bring innovative software to people who today do not want it, because what they have today is complex enough. We have to change the user interface paradigm, and we have to do it in a way that doesn't scare away the people who already "know something."
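
To make the "training the computer" idea a bit more concrete, here is a minimal sketch (in Python, not Smalltalk, and certainly not the system from the video) of how loosely phrased typed commands might be mapped onto an ordinary object model. Everything in it is hypothetical: the Document class, the CommandInterpreter, and the sample phrases exist only for illustration, and a real system would need a much richer grammar and, as noted above, probably some AI.

    import re

    class Document:
        """A stand-in for an application object the user might want to drive."""
        def __init__(self, text=""):
            self.text = text

        def count_words(self):
            return len(self.text.split())

        def replace(self, old, new):
            self.text = self.text.replace(old, new)
            return self.text

    class CommandInterpreter:
        """Maps loosely phrased commands onto methods of a target object."""
        def __init__(self, target):
            self.target = target
            # Each pattern captures the arguments the underlying method needs.
            self.rules = [
                (re.compile(r"how many words", re.I),
                 lambda m: self.target.count_words()),
                (re.compile(r"replace (\w+) with (\w+)", re.I),
                 lambda m: self.target.replace(m.group(1), m.group(2))),
            ]

        def tell(self, command):
            for pattern, action in self.rules:
                match = pattern.search(command)
                if match:
                    return action(match)
            return "Sorry, I don't know how to do that yet."

    doc = Document("the quick brown fox jumps over the lazy dog")
    talk = CommandInterpreter(doc)
    print(talk.tell("How many words are in this document?"))  # 9
    print(talk.tell("Please replace lazy with sleepy"))       # updated text
    print(talk.tell("Fetch me a coffee"))                     # fallback message

The point of the toy is that "teaching" the computer a new trick amounts to adding another rule, which is programming in everything but name.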


Monday, July 16, 2007

Sun Ray Thin Clients

Last week I made a comment on Paul Murphy's blog about how the thin-ness of Sun Rays is really up to interpretation. Today he's decided to dedicate an entire post to explaining why I'm wrong, because he figures that if I have an incorrect understanding of Sun Rays, then a lot of people have an incorrect understanding of Sun Rays. He's probably right, although I don't think my understanding is that far off base, and he's been kind enough to let me see a draft copy of his post so I can get a head start on the response. Here's what I said:

Smart, Thick, Thin, Display

It's all word games. Depending on how you define "processing," there is processing going on. It still has to render graphics, translate keyboard and mouse events, etc. A Sun Ray is just a compacted Sun workstation of yesteryear without a hard drive, with special firmware designed to work solely as an X-Windows server.

The problem is that the attempt to make "smart displays" seem more fundamentally different from other similar solutions just muddies the waters. People like me groan because yet another term has been introduced that means almost the same as other terms that will need to be explained to the higher-ups. The higher-ups get confused and either latch onto it or, more likely, have their eyes glaze over.

Anyway, enough with our industry's incredible ability to make sure words are completely meaningless...

The problem with Sun Ray and other similar solutions is that they are really a local optimum based on today's technology and practices for a relatively narrow range of priorities. Change the priorities and the solution is no longer optimum. Introduce distributed computing techniques with the same low administrative overhead and they lose out entirely.

As far as I can tell, the first part is technically accurate. Older Sun Rays ran a 100MHz UltraSPARC II, had 8MB of RAM, and ran a microkernel. See here and here. Newer ones use an even beefier system-on-a-chip.

So the Sun Ray client is obviously processing something, and actually has a fair amount of processing power. Just because it is not maintaining any application state doesn't mean it's not doing anything. Murph asserts that a Sun Ray is not an X-terminal, but he'll have to explain the difference to me. He could be right... I don't know. It's been about 7 years since I've used a Sun Ray, but from what I remember it felt just like using Exceed on a PC under Windows, which is quite common at my employer. He did mention this:
Notice that the big practical differences between the Sun Ray and PC all evolve from the simplicity of the device in combination with the inherently multi-user nature of Unix. In contrast the differences between the Sun Ray and X-terminal arise because the X-terminal handles graphics computation and network routing - making it more bandwidth efficient, but marginally less secure.
But the Sun Ray quite clearly has a graphics accelerator and talks over the network, so while there is probably a subtle difference in there that I'm not grasping, it doesn't seem particularly material. But that's not really the meat of the debate; it's just a technical quibble over what constitutes processing and an operating system. He's diluting the debate by calling Sun Rays "smart displays" instead of "thin clients" and thus drawing a false dichotomy, and I'm doing the same by pointing at internal technical specs that have little to do with actual deployment.

The real debate is: "Where should processing take place?" I'll give you a trite answer - as close to the data as possible. Any computation involves a set of inputs and a set of outputs. It makes no sense to shuttle a million database rows from a database server to an application server or client machine in order to sum up a couple of fields. It makes much more sense to do it where the data is, and then ship the result over the network. (There's a small sketch of this at the end of this post.) Likewise, if you have a few kilobytes of input data and several megabytes or gigabytes of results, it makes sense to do the computation wherever the results are going to be needed.

So this is my first issue with the centralized computing paradigm. Right now I'm typing this blog in Firefox on Linux, and my computer is doing a fair amount of work to facilitate that interaction with Blogger. I've also got a dozen other windows open. Most of the memory and CPU I'm consuming is dedicated to the local machine interacting with me, the local user. Only a couple pages of text are being exchanged back and forth with Blogger. So why not let the Sun Ray run Firefox (and an email client, a word processor, etc.)? The new ones have the processing power. They probably would need $100 worth of RAM or so to keep a stripped-down Unix variant in memory, which could be loaded from the network. Intelligent configuration could make the client smart about whether to run an app locally, on a server, or on an idle workstation down the hall.

Murph gives seven reasons in favor of his approach:

1. Portability. Murph asserts that with Sun Rays you gain portability, because you can halt a session one place and immediately resume it another place. I don't doubt that is true, but I don't see any technical reason why the same could not be accomplished with a distributed architecture. All that happens is your terminal becomes the processing server for a remote application. Remember, in Unix, there isn't a fundamental difference between a client and a server. I'm not going to address the laptop debate right now. Murph has made some very good arguments against laptops in the past based on the security concerns of them being stolen, despite strong encryption. I think he underestimates the value of laptops and is probably wrong, but there are a substantial number of people who could live with a "portable terminal" because their homes and hotels have sufficient bandwidth.

2. Reliability. This is where the distributed model really shines. In my experience, networks are generally one of the less reliable portions of the computing environment, especially WANs and my own internet connection. A pure thin-client solution simply stops working when the network goes down. In the past, Murph has asserted that everyone needs network connectivity to work, so this doesn't matter. But in my opinion most professionals can continue working for several hours, possibly at reduced productivity, when disconnected from the network.
That buys time for IT to fix the network before the business starts bleeding money in terms of lost productivity. Keeping processing local, along with caching common apps and documents, increases the effective reliability of the system.

3. Flexibility. Murph lists nothing that cannot be done with a locally-processing workstation.

4. Security. Don't use x86 workstations, especially running Windows. The security gains come from a more secure operating system on a processor architecture designed for security and reliability. Eliminating permanent storage from the client does buy some security, because there is then no way to walk out the door with all the data, but distributed processing doesn't preclude centralized permanent storage. There are, of course, substantial advantages to having local storage, like being able to make a laptop that can be used in an entirely disconnected fashion. But I think that's a separate debate.

5. Processing power. There's nothing about a distributed computing model that says you can't install compute servers. Heck, this is done all the time with Windows (both to Windows servers and more commonly to Unix servers). Murph's example of a high-performance email server has nothing to do with the thin-client architecture, and everything to do with properly architecting your mail server.

6. Cost. There aren't significant cost savings in terms of hardware when switching to Sun Rays. Hardware is cheap, and you can throw out a lot of pieces in the common PC to reduce the cost. In fact, I bet Sun Rays cost more because of the servers. I don't doubt that when effectively administered they cost less to keep running than a Windows solution, but that's mostly because of Unix. I'll admit that it is probably cheaper to administer Sun Rays than my distributed model, because I think mine would require greater skill and discipline (meaning higher-paid admins), so in the absence of detailed numbers I'll say it's a wash.

7. User freedom. This is partially a consequence of using Unix instead of Windows, and mostly a consequence of changing culture.

So as I said before, Sun Rays, and centralized computing in general, represent a kind of local optimum for a given set of priorities and today's practices. But I don't think they make a solid generalized approach. Distributed computing can be done successfully with all the advantages of Murph's Sun Ray architecture using today's technology; it just isn't common.

Now I've ignored the elephant in the room: much essential software only runs on Windows, and the minute you introduce Windows into the mix (local or centralized), you start compromising many of the advantages outlined above. Of course, what good is a computing environment if it won't run the desired software? Consequently, I think it will be a long time before anything like this flies in most enterprise environments.
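
As promised above, here is a small Python sketch of the data-locality point. The orders table and its columns are made up, and an in-memory SQLite database stands in for a real database server, so it only shows the shape of the two approaches; the real cost of the first one is the network traffic, which a local example can't demonstrate.

    import sqlite3

    # An in-memory database standing in for a remote database server.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                     [(i * 0.01,) for i in range(100000)])

    # Wasteful: drag every row over to the client just to add up one column.
    rows = conn.execute("SELECT amount FROM orders").fetchall()
    total_client_side = sum(amount for (amount,) in rows)

    # Better: let the server do the arithmetic and ship back a single number.
    (total_server_side,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()

    print(total_client_side, total_server_side)
    conn.close()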


Monday, July 02, 2007

Creators, Hounders, and Processors

There are three types of people in a corporate environment:

  1. Creators
  2. Hounders
  3. Processors
These are really a continuous spectrum, and people can occupy more than one location on the spectrum, but I think those are the large buckets into which most people fall. I'm very tempted to say that creators are the people who produce most of the value, but that would probably be my creator-self talking.

Creators are the people who create something of value from almost nothing. They are the engineers, scientists, researchers, and marketers who create the ideas that advance an organization. They are both the most coveted and most annoying employees, and generally compose a very small percentage of an organization. Within an IT organization, creators tend to be architects, analysts, and software developers, although most people in those roles aren't really creators. I'll talk about hounders later.

Processors are the people who take something, apply some known method to it, and spit out something of greater value. In law, a litigator is most likely a creator; the guy who puts together your will is a processor. A processor's job is to keep things neat, complete, well organized, and most of all conformant to any important rules and regulations. Without processors we would live and work in total chaos. Processors are the people who take the mess produced by creators and turn it into something that is genuinely useful and widely applicable. They are the people who continually apply it so that it becomes part of our lives. In IT, QA people, system administrators, database administrators, application administrators, configuration managers, help desk staff, and countless others are generally processors. Management often wishes everyone (except them) could be a processor, because processors are predictable and their work usually has immediately apparent value. Processors make up a very large portion of most organizations.

Creators like shiny new things. They hate forms and anything else that seems to constrain their ideas. They only really believe in deadlines for other people, because creativity cannot be hurried, and oftentimes they even have trouble pushing deadlines on others, because the last thing they want is half-baked ideas. They don't want to be bothered with the extraneous details required by processors. Processors often claim to believe in deadlines, and to be very predictable in the time it takes to process work. However, they exclude from this the time they spend waiting for responses that they need from others. If you don't believe me, go see how many help desk tickets your IT organization has sitting in a state like "waiting for user response," quite likely to a question like "Is your computer plugged in?" The result is that processors are only concerned about deadlines when they can't blame someone else, and creators are prima donnas who can be bothered neither by deadlines nor by a processor's need for additional information. Consequently creators are the perfect scapegoat for processors ("he hasn't responded to my email asking for more information"), and processors for creators ("I sent it to him last week, what do you mean he needs more information? I'm busy. It's his fault this is late, not mine.").

Now enter hounders to solve this problem. They exist to overcome the impedance mismatch between creators and processors, and they may be as numerous as the other two groups combined. Salespeople are all hounders. Most managers are hounders. Everyone who emphasizes (or randomly inserts) the word "manager" in their role or title is a hounder.
Hounders are the people who constantly talk about action items and religiously send out meeting minutes. They are also often the people who lack the technical skills to be creators and the detail orientation to be processors. Just as you need some bacteria in your body, you need some hounders in your organization. But too many is a sign that your organization is diseased, as most of them don't directly produce anything. Also, without day-to-day observation, it can be very difficult to distinguish a hounder from a creator or processor, because hounders tend to confuse (intentionally or otherwise) making somebody do something with doing it themselves.

And they tend to make organizational problems worse. Oftentimes if the creators and the processors are having a really hard time communicating, it's because one or both sides has been allowed to become too extreme. When processors get in trouble, their first instinct is to create more rules for getting tasks out of their queue and off their clock. Creators, when they get in trouble, tend to avoid disclosing details, especially when they aren't yet sure of them. Consequently, all the hounder has to do is encourage these behaviors, and the hounder becomes (in the short term) more critical to the organization, because now the only place the processor can get his details is from the hounder, who either tricks the creator into providing them or takes responsibility for any problems that may arise. In extreme circumstances, processors may even stop communicating with other processors, each simultaneously declaring that items sit in the other's queue, making hounders necessary to make what should be a smooth interaction work.

All three types are needed in any organization. When properly balanced they complement each other. I've picked on hounders here because I think they are the most opportunistic, but creators and processors can be quite detrimental as well when they gain too much influence.

P.S. Now, you might be wondering what happens to the people who lack the technical skills and creativity to be creators, the detail orientation and technical skills to be processors, and the personality to be hounders. Some fake it, usually by obscuring their lack of skill or creativity, for example by parroting something they heard or read, or by calling "buy IBM" an IT strategy. It's amazing how smart a person will think you are if you consistently agree with them.
