More Efficient User Interfaces
Tuesday, July 17, 2007

Over the weekend I ran into the following video about changing user interface paradigms. I found the language-based interface really interesting, but didn't much care for the ZUI. Unfortunately, the language-based interface reminds me of old games from about the time personal computers became powerful enough to run Eliza-like software (for hackers, go here for code, and here for an easy environment on Windows). Basically you typed in "natural language" commands, and the characters were supposed to carry them out. Personally, I thought it was about the most awful way to play a game. Maybe in the past couple of decades the technology has progressed. It looks like it has potential.

Anyway, the first thing that struck me was that it reminded me of SciFi books where they describe any advanced computer use as "programming," possibly because I largely stick to classic SciFi. Of course, the vast majority of computer users, who indeed perform relatively complex tasks, are not programmers. Not by a long shot. But I think the language-based interface concept could bring "programming" one step closer to the masses (whether that is good or bad is an exercise for the reader).

The second thing I noticed was a striking similarity to Smalltalk, only Smalltalk is a lot better. You see, if all of your software lived inside a Smalltalk image, then you could easily interact with it programmatically. Furthermore, Smalltalk's syntax and dynamic nature make it fairly approachable for those who have not been corrupted by years of experience with less lofty languages (in other words, me). Given a "humanized," if you will, object model, I could see some real potential. At a minimum it would be a realization of much more componentized software. Alas, we can dream...

The paradigm for such a system would be something like this: someone builds an API that translates human-language-like typed commands into Smalltalk code. It could quite possibly be an internal DSL. Then you build the interface so that an easy keystroke lets you "talk" to the computer by typing. AI would be optional, but I think essential for really advanced use by lay users. As time progresses, the user and the computer would learn more and more complex tasks - the user would essentially be programming the computer, but it would feel more like training. Ultimately this could facilitate much more efficient and sophisticated computer use. Maybe.
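To make that a little more concrete, here is a toy sketch of what I have in mind, written in Python rather than Smalltalk purely so it stays self-contained and runnable. The object model, the command phrases, and the "teach" verb are all made-up illustrations, not an existing API. The point is just that typed phrases get dispatched to an ordinary object model, and that the user can define new phrases in terms of old ones - the part where programming feels like training.

# Toy sketch of the idea above. Every name here (Document, CommandShell,
# "teach", the phrases) is an illustrative assumption, not a real library.

class Document:
    """Stand-in for the 'humanized' object model such a system would expose."""
    def __init__(self):
        self.paragraphs = []

    def add_paragraph(self, text):
        self.paragraphs.append(text)
        print("added paragraph: " + repr(text))

    def show(self):
        print("\n".join(self.paragraphs) if self.paragraphs else "(empty document)")


class CommandShell:
    """Maps typed phrases onto the object model; 'teach' lets the user
    compose new phrases out of ones the system already knows."""
    def __init__(self, doc):
        self.doc = doc
        self.known = {"show the document": doc.show}
        self.macros = {}  # new phrase -> list of already-known phrases

    def run(self, phrase):
        phrase = phrase.strip().lower()
        if phrase.startswith("write "):          # e.g. "write meeting notes"
            self.doc.add_paragraph(phrase[len("write "):])
        elif phrase in self.known:
            self.known[phrase]()
        elif phrase in self.macros:
            for step in self.macros[phrase]:     # replay the taught steps
                self.run(step)
        else:
            print("I don't know how to " + repr(phrase) + " yet.")

    def teach(self, new_phrase, steps):
        self.macros[new_phrase.strip().lower()] = steps


shell = CommandShell(Document())
shell.run("write meeting notes for Tuesday")
shell.teach("start my notes", ["write agenda", "show the document"])
shell.run("start my notes")

A real system would obviously need far richer parsing (and probably the AI mentioned above), but even this little shell shows how complexity can accumulate on the user's own terms instead of in menus.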
There are counterpoints. Paul Murphy seems to think that everyone should learn keyboard shortcuts and Unix command-line utilities. While I agree that both are useful, somehow I don't think the everyday computer user is ever going to do it.

The more convincing counterpoint was made by my wife, an attorney, who wasn't even really trying to make a counterpoint. The exchange went something like this:

Me: I saw this video the other day about changing user interface paradigms to consist of unified functionality instead of distinct applications.
Her: Why would you want that?
Me: Well, different applications do different things, so oftentimes you have to use more than one to get something done.
Her: Word is complicated enough already. Wouldn't that make it more complicated?
Me: Yeah, that's why this guy advocated switching to text-based commands instead of menus and dialogs and stuff.
Her: Huh?
Me: You know how Word has a ton of menus, and those have sub-menus, and those launch dialog boxes with a bunch of tabs in them?
Her: Yeah. It's confusing.
Me: Well, if there were easy text commands that resembled natural language, then you could just tell the computer what to do and not worry about the menus.
Her: Wouldn't that mean I'd have to learn all that? What's wrong with what we've got? It works. I know how to use it. Why change? People won't like the change. It will confuse them.
Me: Well, how about intelligent search....

And so on to another defeat for the engineer. But that's the point. We dream of novel new things. We dream of users who care to really learn applications. We get people who want consistency. They've learned it, and they don't want to learn it again.

So why is this important? Well, two reasons. One is that people are amazingly hampered by most applications. By people, I mean everyone. How many of you have spent hours fighting Word or PowerPoint to get some document to look right? I know I've done it plenty of times. That's wasted time, and time is money.

But there's a more important reason. I think we've hit a plateau in terms of technology innovation. We have all this shiny new processing power and networks, and most "innovation" seems to consist of things like "social networking" (many of us were doing that at 1200bps or less years ago), sharing pictures (ditto), playing media (replacing the TV/VCR isn't innovation), and other such pursuits that, while they may be very beneficial, hardly represent technology advancements.

Why? There are a lot of reasons. I think one of them is that computers are so complicated already that the odds of getting a user to accept them doing something more complicated are incredibly low. You can do it in specialized fields that have a technical leaning (science, engineering, finance, economics), but doing it for others is another ballgame. In the long run, in order to keep innovating, we have to be able to bring innovative software to people who today do not want it, because what they have today is complex enough. We have to change the user interface paradigm, and we have to do it in a way that doesn't scare away people who already "know something."
Posted by Erik Engbrecht at 11:08 PM
Labels: innovation, user interface
2 comments:
You are right. People want to stick with what they know.
People were enticed to go to the trouble of learning to use computers because the benefits outweighed the pain. Now we are stuck with the GUI paradigm until something else comes along that offers the ability to do what you want and whose benefits outweigh the learning pains.
Regards
bportlock
Hi, nice reading your blog.