Tuesday, December 28, 2010

Easier object inspection in the Scala REPL

If you're like me, you spend a fair amount of time poking at objects of poorly documented classes in a REPL (Scala or otherwise...). This is great compared to writing whole test programs solely for the purpose of seeing what something does or what data it really contains, but it can be quite time consuming. So I've written a utility for use on the Scala REPL that will print out all of the "attributes" of an object. Here's some sample usage:

scala> import replutils._
import replutils._

scala> case class Test(a: CharSequence, b: Int)
defined class Test

scala> val t = Test("hello", 1)
t: Test = Test(hello,1)

scala> printAttrValues(t)
hashCode: int = -229308731
b: int = 1
a: CharSequence (String) = hello
productArity: int = 2
getClass: Class = class line0$object$$iw$$iw$Test

That looks fairly anticlimactic, but after spending hours typing objName. to see what's there, and poking at methods, it seems like a miracle. Also, one neat feature is that if the class of the object returned is different from the class declared on the method, it prints both the declared class and the actual returned class.

Code and further usage instructions are available on Bitbucket here.

So what exactly is an attribute?

I haven't quite figured that out yet. Right now it is any method that meets the following criteria:

  1. the method has zero arguments
  2. the method is public
  3. the method is not static
  4. the method's name is not on the exclude list (e.g. wait)
  5. the method's return type is not on the exclude list (e.g. Unit/void)
  6. the method is not a "default argument" method
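
To make those criteria concrete, here's a rough, hypothetical sketch of the filtering using plain Java reflection. This is not the actual replutils code; the exclude lists and output formatting are illustrative only:

import java.lang.reflect.Modifier

object AttrSketch {
  private val excludedNames = Set("wait", "notify", "notifyAll", "clone", "finalize")

  def printAttrValues(obj: AnyRef): Unit =
    for {
      m <- obj.getClass.getMethods                  // getMethods already returns only public methods
      if m.getParameterTypes.isEmpty                // zero arguments
      if !Modifier.isStatic(m.getModifiers)         // not static
      if !excludedNames.contains(m.getName)         // not on the (illustrative) name exclude list
      if m.getReturnType != java.lang.Void.TYPE     // does not return Unit/void
      if !m.getName.contains("$default$")           // not a "default argument" method
    } {
      val value = m.invoke(obj)
      val declared = m.getReturnType.getSimpleName
      val actual = if (value == null) declared else value.getClass.getSimpleName
      val typeInfo = if (declared == actual) declared else declared + " (" + actual + ")"
      println(m.getName + ": " + typeInfo + " = " + value)
    }
}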
Please post any feedback in the comments here, or better yet in the issue tracker on Bitbucket.


Monday, December 27, 2010

Playing with Scala Products

Have you ever wanted something like a case class, that magically provides decent (for some definition of decent) implementations of methods like equals and hashCode, but without all of the restrictions? I know that I have. Frequently. And recently I ran into an answer on StackOverflow that gave me an idea for how to do it. The trick is to play with Products. I'm not sure how good of an idea it is, but I figure the Internet will tell me. Here's the basic idea:

import scala.runtime.ScalaRunTime

trait ProductMixin extends Product {
  def canEqual(other: Any) = toProduct(other) ne null
  protected def equalityClass = getClass
  protected def equalityClassCheck(cls: Class[_]) = equalityClass.isAssignableFrom(cls)
  protected def toProduct(a: Any): Product = a match {
    case a: Product if productArity == a.productArity && equalityClassCheck(a.getClass) => a
    case _ => null
  }
  override def equals(other: Any) = toProduct(other) match {
    case null => false
    case p if p eq this => true
    case p => {
      var i = 0
      var r = true
      while(i < productArity && r) {
        if (productElement(i) != p.productElement(i)) r = false
        i += 1
      }
      r
    }
  }
  override def productPrefix = try {
    getClass.getSimpleName()
  } catch {
    //workaround for #4118, so classes can be defined in the REPL that extend ProductMixin
    case e: InternalError if e.getMessage == "Malformed class name" => getClass.getName()
  }
  override def hashCode = ScalaRunTime._hashCode(this)
  override def toString = ScalaRunTime._toString(this)
}

Basic Usage

There are a couple of ways to use ProductMixin. The first, and probably the simplest, is to extend one of the ProductN classes and provide the appropriate defs:

class Cat(val name: String, val age: Int) extends Product2[String, Int] with ProductMixin {
  def _1 = name
  def _2 = age
}

The second way is to implement the Product methods (productArity and productElement) yourself:

class Dog(val name: String, age: Int) extends ProductMixin {
  def productArity = 2
  def productElement(i: Int) = i match {
    case 0 => name
    case 1 => age
  }
}

Either way, the result is something that has many of the benefits of case classes but allows for more flexibility. Here's some sample usage:

scala> val c1 = new Cat("Felix", 2)
c1: Cat = Cat(Felix,2)

scala> val c2 = new Cat("Alley Cat", 1)
c2: Cat = Cat(Alley Cat,1)

scala> val c3 = new Cat("Alley Cat", 1)
c3: Cat = Cat(Alley Cat,1)

scala> c2 == c3
res0: Boolean = true

scala> c1 == c2
res1: Boolean = false

Dealing with Inheritance

Equality in the presence of inheritance is very tricky. But, possibly foolishly, ProductMixin makes it easy! Let's say you have a hierarchy, and you want all the classes under some root class to be able to equal each other. Here's how it's done: override equalityClass so that it returns the root of the hierarchy (using a not-so-good example):

scala> abstract class Animal(val name: String, val age: Int) extends Product2[String, Int] with ProductMixin {
     | override protected def equalityClass = classOf[Animal]
     | def _1 = name
     | def _2 = age
     | }
defined class Animal

scala> class Dog(name: String, age: Int) extends Animal(name, age)
defined class Dog

scala> class Cat(name: String, age: Int) extends Animal(name, age)
defined class Cat

scala> val c = new Cat("Felix", 1)
c: Cat = Cat(Felix,1)

scala> val d = new Dog("Felix", 1)
d: Dog = Dog(Felix,1)

scala> c == d
res0: Boolean = true

The reverse can also be accomplished by overriding equalityClassCheck so that it checks whether the classes are equal instead of using isAssignableFrom. That would mean two objects could be equal if and only if they are of the same class.
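
For illustration, a hypothetical class (the name is made up, not from the library) that restricts equality to exactly its own class might look like this:

class ExactCat(val name: String, val age: Int) extends Product2[String, Int] with ProductMixin {
  def _1 = name
  def _2 = age
  // only objects whose runtime class is exactly ExactCat can be equal to this one
  override protected def equalityClassCheck(cls: Class[_]) = cls == getClass
}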

Wrapup

I don't know to what extent this is a good idea. I've only tested it a bit in the REPL. It's neat, but there is one problem that comes to mind: performance. Any primitive members that are elements of the product will end up being boxed before use in the hashCode and equality methods, the extra layers of indirection aren't free, and neither is the loop. That being said, case classes use the exact same method for hashing, so they pay the same penalty there, and unless code is really hashing and equality heavy it probably wouldn't be noticeable.


Thursday, August 26, 2010

Scala is for VB programmers

Normally I don't go for flame bait titles. But I haven't finished my morning coffee yet so I can't help myself. There's once again a debate raging across the internet about whether Scala is more or less complex than Java, along with the more nuanced argument that yes, it is more complex but only framework and library developers have to contend with this complexity. This argument usually happens when someone posts a little bit of code like this:

def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

And then someone responds like this:

Why do not people realize that Java is too difficult for the average programmer? That is the true purpose of Scala, to escape the complexity of Java code! Framework code in Scala, with heavy use of implicit keywords and all kinds of type abstractions, is very difficult. This is correct, but this code is not meant for innocent eyes. You do not use that sort of code when you write an application.

I've seen this type of thinking before. A few years ago I had a bout of insanity and led an ASP.NET project using ASP.NET 2.0. I had no .NET experience prior to this project. The project failed, although the reasons for that failure are legion and unimportant here. But I noticed something about ASP.NET developers: they have no clue how the technology works. It's a black box. Do you know why? Because it is a black box. I searched and searched and couldn't even find a good illustration of the lifecycle for an ASP.NET page that's using events. This type of information is put front and center in the Java world. It's absolutely buried in the Microsoft world. Or at least the parts of it that target the hordes of VB programmers that are undyingly loyal to MS. The framework is magic dust created by the great wizards in Redmond so that you can build solutions for your customers. Do not question the dust. Think about VB. Or, no, don't, it might damage your brain. My coffee hasn't quite kicked in, so I should have some immunity, so I'll do it for you. VB (well, at least old school VB6) is a black box. It was designed to allow people who do not know how to really program, and who will probably never know how to program, to create applications. It's a completely flat, opaque abstraction. The dichotomy between the application developer and the framework developer is as high and as thick as the gates of Mordor.

There are many people in the Scala community who claim Scala's complexity can be hidden from the application programmer. I don't believe them, but there's a chance that they are right. It's technically feasible, and I can see how it could happen if Scala started attracting lots of VB programmers. I can't see how it's going to attract lots of VB programmers, but apparently many people in the Scala community think Scala is for VB programmers. So we'll just have to wait and see...


Sunday, August 22, 2010

Scala Actors: loop, react, and schedulers

One of the unfortunate aspects of many of the "published" (meaning blogged) Scala Actor benchmarks out there is that they rarely pay much attention, if any, to the effects of seemingly idiomatic patterns on performance. Some of the main culprits are:

  1. react versus receive (event-based versus threaded)
  2. loop/react versus recursive react
  3. loop/receive versus receive/while
  4. tweaking (or failing to tweak) the scheduler

I've been working on setting up a "benchmarking framework" in conjunction with experimenting with modifications to the underlying thread pool, so that all the possible permutations are automatically tested. What I have right now is a classic "ring" benchmark setup to permute the schedulers and loop/react versus recursive react. The loop/react pattern is more idiomatic (or at least more common), but has higher overhead, and it looks something like this:

loop {
  react {
    case Msg(m) => // do stuff
    case Stop => exit()
  }
}

The reason it is high-overhead is that both loop and react raise control flow exceptions that result in the creation of new tasks for the thread pool when they are hit, so for each loop, two exceptions are raised and two tasks are executed. There's overhead in both of the operations, especially raising the exceptions. The recursive react pattern looks like this, so it can avoid the extra exception/task:

def rloop(): Unit = react {  //this would be called by the act() method
  case Msg(m) => {
    // do stuff
    rloop()
  }
  case Stop => // just drop out or call exit()
}

Using loop instead of recursive react effectively doubles the number of tasks that the thread pool has to execute in order to accomplish the same amount of work, which in turn makes any overhead in the scheduler far more pronounced when using loop. Now, I should point out that the overhead really isn't that large, so if the actor is performing significant computations it will be lost in the noise. But it's fairly common to have actors do fairly little with each message. Here are some results from the ring benchmark using 10 rings of 10,000 actors passing a token around them 100 times before exiting. I'm using multiple rings because otherwise there is no parallelism in the benchmark. These are being run on my dual core MacBook.

Scheduler                       React Method      Time (sec)
ManagedForkJoinScheduler        LoopReact         45.416058
ManagedForkJoinScheduler        RecursiveReact    25.509482
ForkJoinScheduler               LoopReact         65.268584
ForkJoinScheduler               RecursiveReact    45.85605
ResizableThreadPoolScheduler    LoopReact         98.084794
ResizableThreadPoolScheduler    RecursiveReact    53.379757

The fork/join schedulers are faster than the ResizableThreadPoolScheduler because, rather than having all of the worker threads pull tasks off of a single, shared queue, each thread maintains its own local deque onto which it can place tasks directly if they are generated while it is running a task. This creates a kind of "fast path" for the tasks that involves much less overhead.

I believe the primary reason ManagedForkJoinScheduler is faster is that ForkJoinScheduler does not always leverage the "fast path," even when in theory it could be used. I'm unsure about some of the rationale behind it, but I know some of the time the fast path is bypassed probabilistically in order to reduce the chances of starvation causing deadlock in the presence of long running or blocking tasks. ManagedForkJoinScheduler escapes this particular issue by more actively monitoring the underlying thread pool, and growing it when tasks are being starved. The second reason, and I'm somewhat unsure of the actual degree of the effects, is that ForkJoinScheduler configures the underlying thread pool so that the threads work through the local deques in FIFO order, while ManagedForkJoinScheduler configures the pool such that the local deques are processed in LIFO order. Processing in LIFO order allows the pool to take advantage of locality with regard to the tasks, basically assuming that the last task generated is the most likely to use data that's currently in cache, and thus reduce cache misses.
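
As an aside, the ForkJoinPool that later shipped with the JDK exposes this same FIFO-versus-LIFO choice for the local deques as a constructor flag. The snippet below is just an illustrative sketch of that knob, not the scheduler code discussed here:

import java.util.concurrent.ForkJoinPool

val parallelism = Runtime.getRuntime.availableProcessors

// asyncMode = false: workers pop their local deque in LIFO order (better cache locality)
val lifoPool = new ForkJoinPool(parallelism, ForkJoinPool.defaultForkJoinWorkerThreadFactory, null, false)

// asyncMode = true: workers drain their local deque in FIFO order
val fifoPool = new ForkJoinPool(parallelism, ForkJoinPool.defaultForkJoinWorkerThreadFactory, null, true)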

The benchmark outputs a lot more information than I captured in the above table. If you'd like to run it, you can obtain the code here. The project uses sbt, so you'll need to have it working on your computer. After you run update in sbt to download all of the dependencies, you can run the ring benchmark as follows:

$ sbt
[info] Building project ManagedForkJoinPool 1.0 against Scala 2.8.0
[info]    using ManagedForkJoinPoolProject with sbt 0.7.4 and Scala 2.7.7
> ringbenchmark
[info] 
[info] == compile ==
[info]   Source analysis: 1 new/modified, 0 indirectly invalidated, 0 removed.
[info] Compiling main sources...
[info] Compilation successful.
[info]   Post-analysis: 79 classes.
[info] == compile ==
[info] 
[info] == copy-resources ==
[info] == copy-resources ==
[info] 
[info] == ringbenchmark ==
[info] RingBenchmark ManagedForkJoinScheduler LoopReact 2 ....output truncated...

You can tweak the benchmarks by modifying the sbt project file. If you do run them, I'm very interested in the results.


Saturday, August 21, 2010

Concurrency Benchmarking, Actors, and sbt tricks

Have you ever noticed that other people's microbenchmarks tend to be hard to run and often impossible to duplicate? And are frequently caveated to the hilt? When it gets down to it, a benchmark is really an experiment, and ideally a scientific experiment. That means all factors that are relevant to the results should be clearly recorded, and the tests should be easy for others to duplicate.

Custom sbt actions for benchmarks

In order to test and run benchmarks on the work I'm doing around creating a managed variant of the JSR-166y ForkJoinPool along with supporting infrastructure for use with Scala Actors, I'm creating a test harness that captures a host of environmental factors about how it was run, and writing sbt actions to make it easy to run the benchmarks and automatically permute the variables.

It still needs a lot of work, but I had some trouble figuring out a really basic task so I thought I'd share it. Basically I wanted to build a Task object that consists of several tasks based on information in the project definition and permuted parameters. It's actually pretty easy, as you can see in the snippet below from my project definition:

  /** this task executes the PingPong benchmark using each available scheduler */
  lazy val pingpongbench = pingpongTaskList
  /** produces a sequence of run tasks using all the available schedulers  */
  def pingpongTaskList = {
    val pairs = 100
    val messagesPerPair = 10000
    val tasks = for(sched <- schedulers) yield pingpongTask(sched, pairs, messagesPerPair)
    tasks.reduceLeft((a, b) => a && b)
  }

You can see the whole file here. Basically Task has an && operator that essentially allows you to concatenate one task with another task. This allows you to build a whole chain of tasks. In the example above, I'm having it run the benchmark once for each scheduler configuration. Soon, I'm going to make it permute other parameters. But right now my test harness isn't playing nicely with the schedulers included in the Scala distribution, so first things first.

There's also one other little customization, which is documented, but I think it's important for benchmarking. By default, sbt runs your code in its own process. This can cause problems with multithreaded code, especially if it doesn't terminate properly. It also means the next benchmark to run has to contend with any junk that the previous benchmark left around. So I configured sbt to fork new processes. It just required one line:

override def fork = forkRun

Important variables

Here's what I'm capturing for each run right now so that the results can all be dumped into a big spreadsheet for analysis. I'd like to capture more information about the host machine, such as more information about the CPUs and the loading when the benchmark is being run, but haven't got that far yet. Currently these are all captured from within the benchmark process, mostly using system properties and the Runtime object (a rough sketch of the capture follows the list).

  1. Test Name - obviously needed so that results from multiple benchmarks can be stored in the same file
  2. Scheduler - this is my primary variable right now, I want to run each benchmark with each scheduler while holding everything else constant
  3. # of Cores/Processors - essential so that anyone looking at the results has an idea about the hardware used
  4. Java VM Name - different VMs can perform quite differently
  5. Java VM Version - performance characteristics change from version to version (usually getting better)
  6. Java Version - same reason as above, but this is probably the more publicly known version number
  7. Scala Version - this could be important in the future, as it becomes more common for different projects to be on different version of Scala
  8. OS Name and version - again, it can affect performance
  9. Processor Architecture
  10. Approximate Concurrency (number of simultaneously alive actors) - this allows us to examine concurrency levels versus resource consumption, more concurrency does not necessarily mean that more cores or threads would be helpful
  11. Approximate Parallelism (number of simultaneously runnable actors) - this measures how many cores/threads the benchmark can really keep busy
  12. Approximate Total Messages - this estimates the amount of activity that takes place during the benchmark, generally the benchmarks I'm looking at contain very little logic because they are intended to measure overhead introduced by the framework
  13. Total Wall Clock Time (seconds) - as measured using nanoTime within the benchmark process
  14. Initial Thread and Maximum Observed Thread Count - used to examine automatic expansion of the thread pool
  15. Initial Free Memory and Minimum Observed Free Memory - threads use a fair amount of memory, so performance impacts may show up as pressure on the GC as well as contention for the CPU
  16. Initial and Maximum Observed Total Memory - threads use a lot of memory, so it's important to track usage
  17. Verbose - debugging output pretty much invalidates any of these tests
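
Here's a rough, hypothetical sketch of that capture; the names are illustrative and the real harness records more than this:

object EnvCapture {
  def capture(): Map[String, String] = {
    val rt = Runtime.getRuntime
    Map(
      "cores"        -> rt.availableProcessors.toString,
      "jvmName"      -> System.getProperty("java.vm.name"),
      "jvmVersion"   -> System.getProperty("java.vm.version"),
      "javaVersion"  -> System.getProperty("java.version"),
      "scalaVersion" -> scala.util.Properties.versionString,
      "osName"       -> System.getProperty("os.name"),
      "osVersion"    -> System.getProperty("os.version"),
      "arch"         -> System.getProperty("os.arch"),
      "freeMemory"   -> rt.freeMemory.toString,   // sampled again during the run for the observed minimum
      "totalMemory"  -> rt.totalMemory.toString   // sampled again during the run for the observed maximum
    )
  }
}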


Sunday, August 08, 2010

Improving Schedulers for High-level, Fine-grained Concurrency Frameworks

Quite a while ago, Greg Meredith made a comment that really made me stop and think, and that has been lingering in the back of my mind ever since:

Erik, Alex, John, et al, i'm loathe to hijack this thread -- which is a good one -- but the experience with the lock up described below is really just the tip of the iceberg. Unless specific scheduling contracts and semantics can be worked out, a pluggable scheduling architecture is just asking for disaster. Worse, it means that a whole host of compiler support that could be provided to apps that use actor-based concurrency is null and void because those solutions (be they model-checking or types-based) will critically depend on specific ranges of scheduling semantics. Actors and other higher level concurrency constructs are a major appeal of languages like scala. If they prove themselves to be "not ready for prime time" the impact on perception might not be contained to just that usage of the language. Best wishes, --greg

That wasn't the first time that Greg made a comment along those lines, and it certainly wasn't the last. At one point he offered to allow a few individuals to look at some older code that he believed could help. Greg's a really smart guy. If he's worried, we should all be worried.

Empirical Evidence of the Problem

Contrary to what many may think, this is hardly an academic problem. A significant portion of the issues that people new to Scala Actors have can be traced to the fact that they expect the scheduler to be making guarantees that it simply does not make. Here's a sampling:

  1. Actors not creating enough threads
  2. Scala Actors different behavior on JRE 1.5 and 1.6
  3. Actor starvation problem (related to previous)
  4. Actor as new thread

There are others, including cases where people have both rightly and wrongly blamed the default scheduler for serious problems with their programs. Most of the above can be traced back to the fact that the default scheduler for Scala Actors uses a slightly modified version of the JSR-166y ForkJoinPool, which has issues described below (source):
Like all lightweight-task frameworks, ForkJoin (FJ) does not explicitly cope with blocked IO: If a worker thread blocks on IO, then (1) it is not available to help process other tasks (2) Other worker threads waiting for the task to complete (i.e., to join it) may run out of work and waste CPU cycles. Neither of these issues completely eliminates potential use, but they do require a lot of explicit care. For example, you could place tasks that may experience sustained blockages in their own small ForkJoinPools. (The Fortress folks do something along these lines mapping fortress "fair" threads onto forkjoin.) You can also dynamically increase worker pool sizes (we have methods to do this) when blockages may occur. All in all though, the reason for the restrictions and advice are that we do not have good automated support for these kinds of cases, and don't yet know of the best practices, or whether it is a good idea at all.

The Scope of the Issue

The issue here is neither limited to Scala nor to actor based concurrency. The general consensus in the concurrency community is that locks and threads are not the right abstractions for concurrency in application code (or hardly any code, for that matter), but there isn't any consensus on what the right abstractions are, or if there is consensus it is that the right abstractions are problem specific. There's no one-size-fits-all concurrency abstraction.

The one trait that all high-level or higher-order concurrency abstractions have in common is that under the hood they rely on some sort of managed thread pool and/or scheduler. For purposes here, when I say "scheduler," I mean the layer with which a concurrency framework interacts directly and likely contains framework-specific logic. When I say "thread pool," I mean the layer that directly manages the threads and is concurrency framework agnostic and may even be shared by several schedulers (sharing will be problematic, but I think ultimately necessary). This isn't a hard line, and often times they may be lumped into one. However I think it's a useful dichotomy. I'm also assuming the scheduler and thread pool rely on cooperative thread sharing, and that preemptive multitasking is left to the virtual machine and/or operating system.

The point is, any concurrency abstraction where the user can submit arbitrary code to be executed can ultimately run into serious issues if that user code does something it does not expect. For example, the fork-join scheduler from JSR-166y that the Scala Actors library uses (I think Clojure uses it as well, but it may not) doesn't automatically enlarge its pool in the presence of unmanaged blocks (including normal blocking IO) or simple long-running computations. This results in the majority of the problems I previously cited, because blocking operations are extremely common tasks, and thus it is a very leaky abstraction.

Key Characteristics of a Good Scheduler

Unfortunately my command of type theory is rudimentary at best, so while Greg could probably describe a well-behaved scheduler using types, I'm going to have to resort to plain English:

  1. The user should not have to think about choosing a scheduler.
  2. The user should not have to think about the size of the thread pool managed by the scheduler.
  3. The user should be able to write code naturally without worrying about any assumptions the scheduler may make.
  4. The user should not have to worry about starting or stopping the scheduler.
  5. Multiple flavors of concurrency abstraction should be able to share the same thread pool.
  6. The scheduler should impose minimal overhead.

There are probably others, but I think this is a good start.

The Performance Problem

So why hasn't this problem already been solved? Perhaps it has been already and I just don't know about it. If someone has solved it, please let me know. But as far as I can tell it's because there's a perceived nasty performance trade-off. Here's a quote from Philipp Haller:

The starvation detection mechanism in Scala 2.7 was too expensive to support in the new default scheduler. However, there is a starvation detection mechanism for thread-blocking methods of the `Actor` object (`receive` etc.).

Similar answers have been provided when people have questioned the wisdom of using the FJ framework over the classes included with the JDK. I've tested it a bit and it's true, the FJ framework does yield a significant improvement for certain classes of tasks over the JDK classes (although I believe using exceptions for control flow imposes a far larger penalty). Basically it appears that the overhead associated with known solutions to the problem outweighs the benefits of fine-grained concurrency.

ForkJoinPool in a nutshell

What makes ForkJoinPool so fast (other than highly optimized code) is that it uses a two-level architecture for its task queues. There's one queue for the pool, and one deque per thread. Most of the data associated with the deques is only updated from the owning thread, so tasks can be added and removed without synchronization and with minimal volatile and CAS operations. The worker threads also steal work from one another in order to spread out the load. When a worker is deciding the next task to execute, it performs the following checks, using the first one to return a task:

  1. The thread's own local deque
  2. The deques of other threads in the pool, scanned in a pseudo-random pattern with guiding hints
  3. The common task queue

Threads are only added to the pool if a ManagedBlocker that the pool can see is used to block, causing the target concurrency not to be met. The default target concurrency for the ForkJoinPool is set to the number of available processors. In Scala Actors this is overridden to be twice the number of available processors. The idea behind this is simple: if you're keeping all your processors busy, then adding more threads will just slow you down. As we've seen, problems arise when a higher level of concurrency is needed due to unmanaged blocks, long running computations, or simply the required concurrency being inherently higher than the default concurrency. This happens because as long as a worker is busy or has something in its local deque, it won't look elsewhere for tasks. If all the workers are in this state, tasks can simply accumulate, resulting in starvation.
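
For reference, cooperating with the pool from the task side looks roughly like the sketch below. It's written against the ManagedBlocker API as it appears in the JDK's ForkJoinPool and is only meant to show the mechanism, not code from the modified pool discussed in this post:

import java.util.concurrent.ForkJoinPool

// Wrapping a blocking call in a ManagedBlocker lets the pool know a worker is about
// to block, so it can add a compensating thread and keep its target parallelism.
def blockingCall[T](compute: => T): T = {
  var result: Option[T] = None
  ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker {
    def block(): Boolean = { result = Some(compute); true }   // perform the blocking work
    def isReleasable(): Boolean = result.isDefined            // true once the work is done
  })
  result.get
}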

A Crack at Solving the Problem

As you might have guessed by now, I'm trying to solve this problem. Right now I wouldn't use the code for anything beyond testing, but despite the fact that I'm still wrestling with the internals of ForkJoinPool I've experienced good results so far. My approach is simple: I added a manager thread that monitors the changes in size of the various queues in the pool, checking whether they've been flushed since the last check, and grows the pool if tasks appear to be starved. The overhead imposed on each worker is minimal, as it only has to update a single additional volatile field when it clears out its queue. The overhead of the manager thread is something, but in the benchmarking I've done so far it doesn't seem to add noticeable overhead. Ironically, Scala Actors already maintain a dedicated thread for the scheduler, it's just that the heuristic checks on the size of the pool were removed in 2.8 (although honestly they never worked that well).
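
To give a feel for the approach, here's a heavily simplified, hypothetical sketch of such a manager thread. The trait and method names are made up for illustration, and the real heuristic watches per-queue counters rather than a single total:

// Hypothetical view of the pool that the monitor needs; names are illustrative.
trait MonitoredPool {
  def completedTaskCount: Long   // total tasks finished so far
  def queuedTaskCount: Int       // tasks waiting to run
  def addWorker(): Unit          // grow the pool by one thread
}

// Daemon thread that periodically checks for starvation: work is queued but
// nothing has completed since the last check, so add a worker.
class PoolMonitor(pool: MonitoredPool, intervalMs: Long = 50) extends Thread {
  setDaemon(true)
  override def run(): Unit = {
    var lastCompleted = -1L
    while (!isInterrupted) {
      Thread.sleep(intervalMs)
      val completed = pool.completedTaskCount
      if (pool.queuedTaskCount > 0 && completed == lastCompleted) pool.addWorker()
      lastCompleted = completed
    }
  }
}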

I have two simple tests/micro-benchmarks using Scala Actors with a custom scheduler built on my ManagedForkJoinPool. The SleepyActor benchmark performs extremely poorly if the scheduler doesn't significantly grow the pool, because each actor sleeps on the thread. The PingPong benchmark spawns a bunch of pairs of actors that simply message each other back-and-forth. It is the type of use case where the ForkJoinScheduler shines, at least in terms of throughput. The results on my MacBook are as follows (these are NOT scientific):

Benchmark      Default Scheduler    ManagedForkJoinScheduler    ResizableThreadPoolScheduler
SleepyActor    92 sec               30 sec                      not tested
PingPong       59.49 sec            47.17 sec                   66.02 sec

As you can see, performance actually improved with my scheduler. This is because the default scheduler for Scala Actors does not always use the "fast path" and put new tasks on the deque of the thread creating them. It only does so about half the time. So for the PingPong test you can see how the thread-local queues impact performance.

Conclusions

It's too early to draw solid conclusions, but based on what I've done so far I think I can develop a solid heuristic for managing the thread pool size, and that the overhead will be negligible. The key is to not impose any more overhead on the worker threads, and to keep the computational overhead of the manager thread as low as possible. This means that the heuristic needs to be simple, and that the wait times between checks should be as long as possible, probably dynamically sized based on how smoothly the pool seems to be running.

What I need right now are more tests and benchmarks to cover "real world" scenarios. I also need to test on machines with more cores. My MacBook isn't exactly a powerful system. If anyone has suggestions, please post them!


Sunday, August 01, 2010

Is the Market for Technical Workers a Market for Lemons?

I saw this article about Amazon Mechanical Turk this morning, and it struck me that the market for technical talent, especially software engineering and IT talent, is a market for lemons. For a little more critical (and hopeful!) look at the idea, take a look at this post at the Mises Institute.

What's a Lemon Market?

The basic idea behind a market for lemons is that the seller has significantly more information about the quality of a product than the buyer (i.e. an information asymmetry). The consequence of this is that buyers are unwilling to pay for more than the perceived average value of the goods on the market, because the buyer does not know whether he is going to get a cherry (a high quality good) or a lemon (a low quality good). This then creates disincentives to sell cherries, because buyers will not pay their market value, and incentives to sell lemons, because they can be sold for more than they are worth. This creates a sort of vicious cycle where high quality goods are withdrawn from the market, thus lowering the average quality of the available goods and consequently the prices buyers are willing to pay. The ultimate result is a race to the bottom in terms of prices, with high quality goods vanishing as collateral damage.

There are five criteria for a lemon market (paraphrased and reordered from Wikipedia):

  1. Asymmetry of information such that buyers cannot accurately assess the quality of goods and sellers can.
  2. There is an incentive for sellers to pass off low quality goods as high quality goods.
  3. There is either a wide continuum in the quality of goods being sold, or the average quality is sufficiently low such that buyers are pessimistic about the quality of goods available.
  4. Sellers have no credible means of disclosing the quality of the goods they offer.
  5. There is a deficiency of available public quality assurance.

The Lemon Market for Technical Talent

The market for technical talent is similar. The information available about most job candidates for technical positions is superficial at best, and completely inaccurate at worst. Furthermore, even when information is available, the buyers often do not have the knowledge required to evaluate it. This even extends to existing, long term employees and contractors. Finally, the quality of an employee is often highly contextual - environmental and cultural factors can significantly boost or dampen a person's performance, and those same factors may have the opposite effect on another. So let's run down the different factors:

  1. Information Asymmetry: Resumes are short, shallow, and often deceptive. Technical skills are very difficult to assess during an interview, and cultural fit can be almost impossible to assess.
  2. Incentives for sellers to deceive buyers: Resumes are deceptive for a reason. It's often stated that you need to pad your resume, because the person looking at it is automatically going to assume it is padded. Furthermore, for almost two decades now technology has been a source of high paying jobs that can be obtained with marginal effort (just turn on the radio and listen for advertisements for schools offering technology training).
  3. Continuum in the quality of goods: This exists in all professions.
  4. Sellers have no credible means of disclosing quality: This is largely but not entirely true. Most paid work that technology professionals do cannot be disclosed in detail, and even when it is, there is reason for buyers to doubt its credibility. Employment agreements may also place restrictions on the disclosure of closely related work, even if it is done on one's own time with one's own resources.
  5. Deficiency of public quality assurance: Employment laws and potentials for litigation (at least here in the U.S.) make the disclosure of employee or even outsourcer performance very risky. Individual workers cannot effectively provide guarantees, and those provided by outsourcers are generally meaningless.

All this really amounts to is: Sellers have no good way of providing information regarding their high quality products, and buyers have no good way of obtaining reliable information about a seller's products. The problem centers entirely around making information available and ensuring its accuracy.

What Technology Professionals can do

Technology Professionals need to build up public reputations. We need to make samples of our work, or at least proxies of samples, publicly available. We need to build more professional communities with broader participation and more local, face-to-face engagement.

I think broader participation is the key. If you look at other professions (yes, I know, I'm lumping all "technology professionals" together and I frequently rant against such a lumping), members are much more active in various groups and associations. In many it's expected and even explicitly required by employers. Sure, there are tons of groups on the internet. There are numerous, and often times enormous technology conferences. There are countless technology blogs and community websites. But I think the conferences are generally too big to be meaningful because most attendees essentially end up being passive receptacles for sales pitches (that's the purpose of these conferences) and maybe they do a little networking and learning on the side. I think even the most active community sites really represent very small slices of the professionals they are intended to serve. Most professionals are passive participants in them at best, just like with conferences.

But there are millions of engineers, software developers, and IT professionals out there. Millions! How many of them do you find actively participating in professional communities of any kind? Not many when you really think about it. This is a problem because as long as the vast majority of technology professionals have no public, professional identity, the vast majority of employers aren't going to see professional reputation as a useful search criterion or even a measure when considering candidates.

What Employers can do

Employers can do one of two things:

  1. Expect applicants for new positions to provide meaningful evidence of their skills and take the time to evaluate such evidence
  2. Just outsource everything to the cheapest company possible. Information on quality is largely unavailable and unreliable, but information on price is readily available and relatively accurate.

You can see which one of those strategies is dominating the marketplace. One requires effort and involves going against the tide. The other is easy (at least to start, it's not easy to make work), and involves running along with the herd.

Except in the limited number of cases where employers, and the hiring managers who work for them, genuinely believe they can achieve a competitive advantage by actively seeking out candidates with demonstrated qualifications (as opposed to a padded resume and fast talk during an interview), I don't think employers are going to seriously consider reputation and objective evidence of skills until such information is easily obtainable and fairly reliable.

Is there any hope?

Yes!

We live and work in an information economy where new forms of information become available every day. There is no reason to believe that, just because today employers mostly hire on faith and expect luck to carry them through, in the future there won't be much more information available. I'm sure there are several companies working on aggregating such information right now. The market is huge. While companies will rarely put much effort into obtaining information and aggregating it into a useful form, they will often pay large quantities of money for it. The key is to make sure the information is there for them to use.

Also, if your resume makes it through all the wickets to a real hiring manager, if you provide an easy way for him to find more good information about you, he'll probably use it. Interviewing people is a lot of work. Deciding among candidates can be very hard. Extra information will almost certainly be used. It just has to be easy to obtain. People are lazy.

But what about old fashioned networking?

I think the numbers involved are too large and relying on old-fashioned networks tends to yield too poor of results. Recommendations and referrals are certainly useful and lead to many, many hires. But they tend to be made more based on personal relationships than based on real qualifications and therefore need to be checked. Schmoozing isn't a real technical skill. That being said, it could quite likely get you a job. It's just that in general I don't want to work with such people in a technical capacity, so I'm not going to recommend people go do it in order to obtain technical jobs. I'm selfish that way.


Tuesday, July 27, 2010

Higher-Level versus Higher-Order Abstraction

Engineering in general, and software engineering in particular, is all about abstraction. The creation and utilization of abstractions is a core part of the daily activities of every good engineer, and many engineers are on a never ending quest to increase their level of abstraction. Abstraction both increases productivity and increases the complexity of problems that can be tackled. Few would argue that increased abstraction is a bad thing.

But people do argue about abstraction, and often condemn abstractions that they view as unfit or too complex. It is common to hear a software developer praise the merits of one means of abstraction and then demean another in the same breath. Some abstractions are too general. Some aren't general enough. Some are simply too complex or, ironically, too abstract. So while engineers almost universally agree that abstraction is good, and more abstraction is better; they often disagree fiercely about what genuinely constitutes "better."

What is an abstraction?

Simply put, an abstraction is something that hides the details involved in realizing a concept, and thus frees the engineer to focus on the problem at hand rather than the details of some other concern. This can take many, many forms. The simplest and most common, so common that the term is often used to describe almost all "good" abstractions, is higher-level abstractions.

Higher-level Abstractions

Higher-level abstractions are fairly simple: they encapsulate details so that they can be used without knowledge or concern about the details. Let's consider a simple example in Scala 2.8 (you can copy and paste these examples directly into the Scala 2.8 REPL):

scala> def sum(data: Array[Double]): Double = {
     |   var i = 0
     |   var total = 0.0 
     |   while(i < data.length) {
     |     total = total + data(i)
     |     i = i + 1
     |   }
     |   total
     | }
scala> def mean(data: Array[Double]): Double = sum(data) / data.length
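
Using them is as straightforward as you'd expect; a quick REPL check (the results follow directly from the definitions above):

scala> val data = Array(1.0, 2.0, 3.0, 4.0)
data: Array[Double] = Array(1.0, 2.0, 3.0, 4.0)

scala> sum(data)
res0: Double = 10.0

scala> mean(data)
res1: Double = 2.5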

So we have two functions: one that computes the sum of an array of doubles, and one that computes the mean. Pretty much any programmer (professional or otherwise) would feel confident both writing and using such functions. They are written in a modern multi-paradigm programming language yet I bet if you went back in time and showed them to programmers in some of the very first high-level languages they would recognize exactly what they do. They clearly encapsulate the details of performing certain computations. But what's more interesting about them is what's missing from them:

  1. Polymorphism
  2. Closures or first-class functions
  3. Usage or creation of higher-order functions
  4. Other Scala magic such as type classes and implicit parameters

In other words, they provide a simple, layered, hierarchical abstraction in a very bounded way. If we step away from software for a minute, you can imagine a digital designer placing a symbol for an adder or a register on a schematic without worrying about the arrangement of transistors that will be required to realize them in an actual circuit, or a mechanical engineer selecting a standard screw or clamp. These are parts that can be simply built, tested, and composed into larger devices.

Higher-Order Abstraction Take 1: The Mathematics of Fruit

Imagine I have some apples. But unfortunately I'm here, off in the internet, so I can't show them to you. I need an abstraction to tell you about them. For example, I could say I have two apples. Two is a number, and numbers are abstractions. I can use the same number two to describe the number of McIntoshes in my refrigerator or the number of Apple Macintoshes I own. Now let's say I also want to talk about my strawberries and blueberries. I have 16 strawberries and 100 blueberries. How many pieces of fruit do I have? 118! How did I figure that out? I used arithmetic, which is an abstraction of higher order than numbers. Now let's say I want to know how many days it will be before I will have eaten all my strawberries. I can write an equation such as: current_fruit - pieces_per_day * number_of_days = fruit_after_number_of_days. I can do even more with this by solving for different variables. This is algebra, and it is a higher-order abstraction than arithmetic. Now let's say I want to build upon that so that I can study the dynamics of the amount of fruit in my refrigerator. I purchase fruit and put it in my refrigerator. I take fruit out and eat it. I forget about fruit and it gets moldy, then I remove it and throw it away. I can capture all of these as a system of differential equations, and using calculus describe all sorts of things about my fruit at any given point in time. Calculus is a higher-order abstraction than algebra. In fact, abstractions similar to the ones built with calculus are what I mean when I say "higher-order abstraction."

At each step up the chain of mathematics both the generality and number of concepts that can be conveyed by a single abstraction increased. In the case of calculus it becomes essentially infinite, and that's the essence of higher-order abstractions: they deal with the infinite or near-infinite. Also, observe that almost everyone from relatively small children on up understand numbers and arithmetic, most adolescents and adults can stumble through applying algebra, and only a very small portion of the population knows anything about calculus, much less can effectively apply it. Also observe that the functions I defined earlier are just below algebra in terms of their order of abstraction.

Higher-Order Abstraction Take 2: Programming in Scala

Let's see if we can sprinkle some higher-order abstraction into our previously defined functions:

scala> def sum(data: Traversable[Double]) = data.foldLeft(0.0)(_ + _)

This definition of sum exposes one higher-order abstraction (polymorphism - it now can use any Traversable[Double] instead of just an Array[Double]), and it uses higher-order functions in its implementation to perform the summation. This new definition is both much shorter and much more general, but it's still relatively approachable, especially for use. Calling it with an Array[Double] works as before, and now it can be used with any number of collections, so long as the collections contain doubles. But forcing the collections to contain doubles is very limiting, so let's see if we can do better:

scala> def sum[T](data: Traversable[T])(implicit n: Numeric[T]): T = {                                  
     |   import n._
     |   data.foldLeft(zero)(_ + _)
     | }

Ahhh, that's better! Much more general. Not only will this work for any numeric type defined in the Scala standard library, but for any numeric type for which the Numeric type class has been defined! It doesn't even need to be in the same class hierarchy! In order to introduce this new level of generality, we've also introduced the following higher-order abstractions:

  1. Classic polymorphism (Traversable instead of Array)
  2. Parametric polymorphism (the type parameter T for various classes)
  3. Higher-order functions and closures (foldLeft and its argument that does the addition)
  4. Type classes (well, Scala's flavor of type classes, e.g. Numeric)

Now, this is the point where people start screaming that abstraction has gone too far. Many professional programmers would look at it and think "WTF?" They could still guess what it does, but the mechanics are quite elusive for anyone who doesn't know Scala reasonably well. That being said, the code is still far more compact than the original imperative version and is extremely general (to which someone replies "Yes, but it's slow as heck compared to the original!"). At this point I would say the order of abstraction has gone from being just below algebra to being on par with integral calculus. Just like with mathematics, we see a significant drop off in the number of people who readily understand it.
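
For what it's worth, using it remains as simple as before; an illustrative REPL session with the new definition in scope:

scala> sum(List(1, 2, 3))
res0: Int = 6

scala> sum(Vector(1.5, 2.5))
res1: Double = 4.0

scala> sum(Set(BigInt(1), BigInt(2), BigInt(3)))
res2: scala.math.BigInt = 6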

Higher-Order Abstraction Take 3: Conclusions

Let's consider a short, incomplete list of higher-order abstractions, means of abstraction, and fields that rely upon higher-order abstractions:

  1. Higher-order functions
  2. Polymorphism and parametric polymorphism
  3. Metaclasses such as in Python and Smalltalk
  4. Rich macros such as in Lisp or C++ Templates
  5. Calculus and all the mathematics built upon it (and various other forms of advanced math)
  6. First-order Logic and its kin, and all the higher-order logics
  7. Any physics or other sciences that are built upon calculus
  8. Any engineering or other applications that are built upon physics

Higher-order abstractions tend to exhibit one or more of the following traits:

  1. They deal in infinities (e.g. integration and differentiation in calculus, universal and existential quantification in first-order logic, polymorphism and type classes in programming)
  2. They deal in transformations (e.g. integration and differentiation in calculus, metaclasses and macros in programming)
  3. They rely upon composition rules (e.g. chain rule and integration by parts in calculus, higher-order functions in programming)
  4. Learning to use any given form of higher-order abstraction requires significant effort and discipline, therefore they tend to be the tools of specialists
  5. They form the foundation of our modern, technological society.

The last two can be converted into conclusions:

  1. Higher-order abstractions are critical for most interesting forms of science and technology
  2. You can't expect a colleague or customer from another field to understand the higher-order abstractions in your field, even if his own field is totally immersed in higher-order abstraction


Saturday, July 24, 2010

Windows 7 Upgrade (so far)

A few weeks ago my wife's Lenovo laptop caught a really nasty virus. Symantec antivirus couldn't clean it up. But I had recovery media (although it was Vista instead of XP), so I used Knoppix to low-level format the HD and flashed an updated BIOS (I've had previous experience with viruses that could be stopped via nothing less). Unfortunately the restore media didn't work...it would boot up, think a while, and then reboot. I also tried an XP install using an old XP disk I had laying around, but the XP installer doesn't get along with the new SATA controller in the laptop, so it BSODs. Working around this problem seems to require creating a special driver disk, which seems to require Windows, which creates a problem because all I have are Leopard (about 2 crashes in almost 3 years, and no viruses, versus what seems like a plague on my wife's laptop every 6 months) and Knoppix. So, being a person who values his time more than his money, I went out and bought a Windows 7 Professional upgrade.

All the Microsoft related blogs assured me this would work, and indeed I have a working install that went almost as smooth as butter. The "almost" is the activation process. I have yet to successfully activate. Going into this my assumption was I would have to call Microsoft in order to activate. What I didn't realize is that their human systems don't seem any smarter than their automated systems. So first I called the phone number that Microsoft's support website told me to call. Several layers deep this told me to call a special activation line. I want to commend Microsoft at this point for avoiding such advanced technology as call transfers. You really can't trust technology that's decades old. It's better to let people transfer themselves.

So I call the line, and speak to a very friendly automated system. It's asking me for my "installation id," but there isn't one. There isn't one because the computer is connected to the internet, and some genius decided that if one has the internet one would never use the phone system, which would normally be true if some brilliant licensing policy maker hadn't decided that the only way to activate a "clean install" is via the phone system. Anyway, the nice automated system transfers me to a human being. Guess what the human asks me for? An installation id. I tell him I don't have one. I read the various things on my screen to him. He then asks if I am connected to the internet. I say yes. He tells me to disconnect. I disconnect and start over, ultimately ending up in the same place with no installation ID. The support person gives up and tells me I have a deep technical problem, which will require technical support, not activation support (I'm now imagining myself setting up a conference call with tech support, activation support, and the nice automated system lady). Tech support is of course only a US working hours affair, not a Saturday evening affair.

It turns out the support guy just wasn't persistent enough. After hanging up, I reboot the computer and try again without internet. It appears to get me to the same place, except when I hit the magic "next" button, instead of receiving a message telling me that my product key is not valid, I receive an installation ID and am told to call the activation line (which is a 3rd phone number, but gets me to the same place as the first). I want to take a moment to point out some huge flaws here:

  1. The automated system just assumed I would have an installation ID, and had no explanation of how I was supposed to obtain one or why I might not see one, while the activation software is explicitly designed to not provide one to someone connected to the internet
  2. The human being had a vague idea that being connected to the internet was a problem, although he didn't realize it until repeating his requests half a dozen times, and even though he had a vague idea how to induce Windows 7 into providing one, he didn't actually know and gave up quickly
  3. About 2 minutes worth of poking, after 30 minutes of bouncing around on the phone, rewarded me with an installation id

Ok, so let's try again. This time I provide the installation id to the automated system, and the automated system informs me that it is invalid, and sends me to another human being. After 5 or 10 minutes the human being informs me that I need to have a Microsoft OS already installed in order to use an upgrade, and that I cannot use a clean install. He says it is impossible that Windows XP would refuse to install, and thinks it's completely reasonable that I would install one OS just to install another. I rant at him for a few minutes about how my product packaging says nothing about not being able to do a clean install (in fact, it's the only way to upgrade from XP according to the included instructions, although maybe it does some magic if the installer reformats the HD instead of the user already having done it). Anyway, I explain this all to him while trying to remain calm. He puts me on hold. Then when he comes back he tells me that he gave me wrong information because a server is down, and that I need to call back in 30 minutes when the server is up.

So I did that. This time I read my installation ID to the support person, who told me it was invalid. He told me to exit activation and enter my product key again. I dutifully did this, then he put me on hold while he waited for my product key to come up. This of course would never happen, because I wasn't connected to the internet, and in fact you cannot obtain an installation id when connected to the internet. After a few minutes on hold, I was transferred to Microsoft technical support, which of course was closed. So let's recap so far:
  1. None of the Microsoft activation people or systems know what you need to do in order to obtain an installation id, all of them expect you to just have one
  2. In order to obtain an installation id, as far as I can tell you can't be connected to the internet (certain options only appear if you are not connected)
  3. Some support people seem to assume that you are connected to the internet, even though the information they request will never come up if you are not connected to the internet
  4. Support people will transfer you to a line that will not answer without even telling you that you are being transferred
  5. That line seems to only be open during US business hours, which would mean they seem to assume that either you professionally support Microsoft products or are unemployed.

If I can't make this work tomorrow, I think I'm going to obtain an updated version of MS Office for my Mac and give it to my wife, then just install Ubuntu on her laptop.


Saturday, January 16, 2010

Changing Tastes

When I was a kid, I hated onions, green peppers, and mushrooms. I used to tell people I was allergic to mushrooms so they wouldn't try to make me eat them. I hated any sort of chunky sauce or really textured meat. I think I wanted everything to have either the consistency of a chicken nugget or ketchup. My parents used to tell me that when I was older my tastes would change. That I liked crappy food, disliked good food, and eventually I would realize it. They were right.

So kids like chicken nuggets and ketchup. Wow, huge revelation. What does this have to do with technology? I'm on the steering committee for an internal conference on software engineering that my employer is holding. I'm the most junior person on the committee, and most of the members are managers who have other managers reporting to them. Our technical program committee (a separate group with more technical people on it, but still all very senior) just finished abstract selection, and we've been discussing topics and candidates for a panel discussion. During this process I have been absolutely shocked by how much my tastes differ from those of my colleagues.

I've noticed that, within the presentations selected, topics on process, architecture, and management are over-represented. On the other side, many of the abstracts that I thought were very good and deeply technical fell below the line. I can't quite say there was a bias towards "high level" topics, because I think they were over-represented in the submissions as well. Given the diversity of technology involved, that's almost inevitable. A guy doing embedded signal processing and a guy doing enterprise systems would most likely submit very different technical abstracts, but ones on management or process could be identical, so topics that are common across specialties will naturally attract more submissions.

There's a similar story with the panel discussion. I wanted a narrower technical topic, preferably one that is a little controversial, so the panelists and maybe even the audience could engage in debate. My colleagues were more concerned with who would be on the panel than with what they would talk about, and with keeping the topic broad enough to give the panelists freedom.

What's clear is that my colleagues have different tastes in presentation content than I do. I think they are genuinely trying to compose the best conference they can, using their own experiences and preferences as a guide. I think their choices have been reasonable and well intentioned. I just disagree with many of them. If I had been the TPC chair, I would have explicitly biased the selection criteria towards deeper, more technical topics. Those are the topics I would attend, even if they are outside my area of expertise. I would use my preferences as a guide. But that leaves me wondering: in another five or ten years, are my tastes going to change? My tastes have certainly changed over the past decade, so I have no reason to believe they won't change over the next. Will I prefer process over technology and architecture over implementation? Will I stop thinking "show me the code!" and "show me the scalability benchmarks!" when I see a bunch of boxes with lines between them? I don't think so, but only time will tell. When I was a kid, I would never have believed that I would one day willingly eat raw fish, much less enjoy it, but today I do.



Tuesday, January 05, 2010

Your Company's App

Tim Bray just posted a blog entry about how Enterprise IT is doing it wrong. I can't really argue with that. He goes on to explain that Enterprise IT needs to learn from the companies building Web 2.0, because they deliver more functionality in less time and for a whole lot less money. This is where his argument breaks down. The problem is that the type of Enterprise System he's talking about isn't Twitter; it's your company's app.

I work at a pretty big, stodgy, conservative company. I'm pretty sure, as far as things like ERP and PLM are concerned, my employer is exactly the type of company Tim is talking about, and like I said - he's probably right about the problem. But based on my own experiences and observations at my one lonely company, I think he's wrong in his suggestions. Comparing ERP and Facebook is like comparing apples and Apple Jacks.

The reason I think this is that, in terms of deploying Enterprise 2.0 applications, my employer has done fairly well. We have...

...and probably more stuff that I'm not aware of yet. Some of the above were bought, some were built, and some were cobbled together from a variety of sources. I think most of them were built and deployed, at least initially, in reasonable timeframes and at costs about as competitive as a Web 2.0 startup's. Of course, this means intranet scale at a Fortune 100 company (or in some cases a subset of it), not successful internet scale.

The problems the above face are much like the problems your typical startup faces: attracting users, keeping investors (sponsors) happy, and dealing with the occasional onslaught of visitors after some publicity. But these problems are very different from the problems a traditional Enterprise Application faces. There is no SOX compliance. There are no entrenched stakeholders. There are no legacy systems, or if there are, they aren't that important. If only 10% of the potential user community actually uses the application, it's a glowing success. Negligible adoption can be accepted for extended periods while the culture adjusts, because the carrying cost of such applications is low.

But enterprise systems need to deal with SOX. They have more entrenched stakeholders than you can count. These days there's always at least one legacy system, and often several due to disparate business units and acquisitions. If these systems fail, business stops, people don't get paid, or the financials are wrong. If only 90% of your buyers use your ERP system to process purchase orders, it's an abject failure, and possibly endangers the company.

A year or two ago someone "discovered" that a very common, important record in one of our internal systems had 44 (or something like that) required fields, and decided that this was ridiculous. A team was formed to streamline the processes associated with this record by reducing the number of fields. A detailed process audit was conducted. It turned out that every single one of them played a critical role in a downstream process. All the fields remained, and some old timers smiled knowingly.

As many humorless commenters pointed out on Eric Burke's blog, your company's app is your company's app for a reason, and often it's a good reason. These applications aren't complicated because of the technology. Internet applications are complex due to extrinsic forces: the sheer number of users and the quantity of data that can deluge the application at any given moment. Enterprise systems tend to be the opposite. Their complexity is intrinsic, arising from the diversity of the processes they support; the technology complexity (most of which is accidental) is secondary. Tim's suggestions provide a means of addressing technological complexity, and of building greenfield, non-business-critical applications in the face of uncertain requirements. They don't provide a means for dealing with critical systems laden with stakeholders, politics, and legacy.

I think the solution, or at least part of the solution, to the problem with enterprise systems lies in removing much of the functionality from them. There are things that must be right all of the time, but most of business (and engineering, etc.) exists on a much fuzzier plane. The problem comes from coupling the precise (e.g. the general ledger) with the imprecise (e.g. CM on an early-stage design), and thus subjecting the imprecise to overly burdensome controls and restrictions. Only after this separation has been carefully implemented can functionality evolve in an agile fashion. A rough sketch of what I mean follows.
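To make the idea a little more concrete, here's a minimal Scala sketch of the kind of separation I have in mind. All of the names (DraftDesign, DesignWorkspace, SystemOfRecord, promote) are made up purely for illustration, not taken from any real system: the point is just that the loosely controlled side and the strictly controlled side live behind separate interfaces, and the burdensome checks are applied only at the explicit promotion step where data crosses the boundary.

// Imprecise side: cheap to change, minimal validation, free to evolve in an agile fashion.
case class DraftDesign(id: String, notes: String, approved: Boolean)

trait DesignWorkspace {
  def save(design: DraftDesign): Unit
  def find(id: String): Option[DraftDesign]
}

// Precise side: the audited system of record, which only ever sees fully validated records.
case class ControlledRecord(id: String, content: String)

trait SystemOfRecord {
  def commit(record: ControlledRecord): Either[String, Unit]
}

// The single coupling point: promotion applies the heavyweight controls exactly once,
// instead of imposing them on every edit that happens in the workspace.
object Promotion {
  def promote(design: DraftDesign, sor: SystemOfRecord): Either[String, Unit] =
    if (!design.approved) Left("design " + design.id + " has not been approved")
    else sor.commit(ControlledRecord(design.id, design.notes))
}

Nothing on the workspace side needs SOX-style controls, because nothing in it is authoritative until it has been promoted into the system of record.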
