Showing posts with label management. Show all posts

Wednesday, January 12, 2011

Types of Project Failure

I saw this question on StackExchange and it made me think about all the projects I've been on that have failed in various ways, and at times been both declared a success and, in my opinion, been a failure at the same time (this happens to me a lot). Michael Krigsman posted a list of six different types a while back, but they are somewhat IT centric and are more root causes than types (not that identifying root causes isn't a worthwhile exercise - it is). In fact, if you Google for "types of project failures," the hits in general are IT centric. This may not seem surprising, but I think the topic deserves more general treatment. I'm going to focus on product development and/or deployment type projects, although I suspect much can be generalized to other areas.

Here's my list:

  1. Failure to meet one or more readily measurable objectives (cost, schedule, testable requirements, etc)
  2. Failure to deliver value commensurate with the cost of the project
  3. Unnecessary collateral damage done to individuals or business relationships

I think these are important because in my experience they are the types of failure that are traded "under the table." Most good project managers will watch their measurable objectives very closely, and wield them as a weapon as failure approaches. They can do this because objective measures rarely sufficiently reflect "true value" to the sponsor. They simply can't, because value is very hard to quantify, and may take months, years, or even decades to measure. By focusing on the objective measures, the PM can give the sponsor the opportunity to choose how he wants to fail (or "redefine success," depending on how you see it) without excessive blame falling on the project team.

Subjective Objective Failure

The objective measures of failure are what receive the most attention, because, well, they're tangible and people can actually act upon them. My definition of failure here is probably a bit looser than normal. I believe that every project involves an element of risk. If you allocate the budget and schedule for a project such that it has a 50% chance of being met - which may be a perfectly reasonable thing to do - and the project comes in over budget but within a reasonably expected margin relative to the uncertainty, then I think the project is still a success. The same goes for requirements. There's going to be some churn, because no set of requirements is ever complete. There are going to be some unmet requirements. There are always some requirements that don't make sense, are completely extraneous, or are aggressive beyond what is genuinely expected. The customer may harp on some unmet requirement, and the PM may counter with some scope increase that was delivered, but ultimately one can tell whether stakeholders feel a product meets its requirements or not. It's a subjective measure, but it's a relatively direct derivation from objective measures.
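
To make "reasonably expected margin" concrete, here is a toy simulation. It's only a sketch: the lognormal cost distribution, the sigma, and the budget-at-the-median are my own assumptions for illustration, not anything from the post.

    import scala.util.Random

    object BudgetRisk {
      def main(args: Array[String]): Unit = {
        val rng = new Random(42)
        // Pretend the true project cost is lognormally distributed around a
        // median of 100 units with sigma 0.3 -- an invented assumption.
        val costs  = Vector.fill(100000)(math.exp(math.log(100) + 0.3 * rng.nextGaussian()))
        val budget = 100.0 // budget set at the median, i.e. a 50% chance of being met

        val overrunRate = costs.count(_ > budget).toDouble / costs.size
        val p80         = costs.sorted.apply((costs.size * 0.8).toInt)

        println(f"chance of an overrun: ${overrunRate * 100}%.0f%%") // ~50%
        println(f"80th percentile cost: $p80%.0f")                   // an overrun still within expectations
      }
    }

In that world, coming in somewhat over a 50/50 budget isn't mismanagement; it's the estimate behaving exactly as advertised.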

Value Failure

Failure to deliver sufficient value is usually what the customer really cares about. One common situation I've seen is where requirements critical to the system's ConOps (concept of operations), especially untestable ones or ones that are identified late in the game, are abandoned in the name of meeting more tangible measures. In a more rational world such projects would either be canceled or have success redefined to include sufficient resources to meet the ConOps. But the world is not rational. Sometimes the resources can't be increased because they are not available, but the sponsor or other team members choose to be optimistic that they will become available soon enough to preserve the system's value proposition, and thus the team should soldier on. This isn't just budget and schedule - I think one of the more common problems I've seen is the allocation of critical personnel. Other times it would be politically inconvenient to terminate a project, due to loss of face, the necessity of pointing out an inconvenient truth about some en vogue technology or process (e.g. one advocated by a politically influential group or person), idle workers who cost just as much when they're idle as when they're doing something that might have value, or just a general obsession with sunk costs.

This is where the "value commensurate with the cost" comes in. Every project has an opportunity cost in addition to a direct cost. Why won't critical personnel be reallocated even though there's sufficient budget? Because they're working on something more important. A naive definition of value failure would be a negative ROI, or an ROI below the cost of capital, but it's often much more complicated.
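
To put numbers on the naive definition, here's a back-of-the-envelope sketch; every figure is invented purely for illustration.

    object ValueFailure {
      def main(args: Array[String]): Unit = {
        val cost           = 1000000.0 // direct cost of the project
        val valueDelivered = 1100000.0 // value it actually produced
        val costOfCapital  = 0.12      // what the sponsor could earn elsewhere

        val roi = (valueDelivered - cost) / cost
        println(f"ROI = ${roi * 100}%.1f%%")                            // 10.0%
        println(s"clears the cost of capital? ${roi >= costOfCapital}") // false

        // Positive ROI, yet still a value failure by the naive definition --
        // and that's before counting the opportunity cost of the critical
        // people tied up here instead of on "something more important."
      }
    }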

It's also important to note here that I'm not talking about promised ROI or other promised operational benefits. Oftentimes projects are sold on the premise that they will yield outlandish gains. Sometimes people believe them. Often they don't. But regardless, it's usually perfectly possible to deliver significant value without meeting these promises. This would be a case of accepting objective failure because the value proposition is still there, it's just not as sweet as it was once believed to be.

Collateral Damage

Collateral damage receives a fair amount of press. We've all heard of, and most of us have at one time or another worked on, a Death March project. In fact, when I first thought about failure due to collateral damage, I thought any project causing significant collateral damage would also fall under at least one of the other two categories. It would be more a consequence of the others than a type unto itself. But then I thought about it some more, and realized I had experienced a couple of projects where I feel the damage done to people was unacceptable, despite the fact that in terms of traditional measures and value delivered they were successful. There are a couple of ways this can happen. One is that there are some intermediate setbacks from which the project ultimately recovers, but one or more people are made into scapegoats. Another arises from interactions among team members that are allowed to grow toxic but never reach the point where they boil over. In my experience a common cause is an "extreme" performer, either someone who is really good or really bad, with a very difficult personality, particularly when the two are combined on a project with weak management.

Now, you might be wondering whether collateral damage is ever necessary. It's actually fairly simple. Consider a large project that involves a large subcontract. The project may honestly discover that the value contributed by the subcontract does not justify its cost, or that the deliverables of the subcontract are entirely unnecessary. In this case the subcontract should be terminated, which in turn is likely to lead to a souring of the business relationship between the contractor and the subcontractor, along with potentially significant layoffs at the subcontractor. Oftentimes a business must take action, and there will be people or other businesses who lose out through no fault of their own.

Back to Trading Among Failure Types

A Project Manager, along with project sponsors and other key stakeholders, may actively choose which type of failure, or balance among them, is most desirable. Sometimes this "failure" can even really be success, so long as significant value is delivered. Some of the possible trades include:

  1. Failing to meet budget and schedule in order to ensure value.
  2. Sacrificing value in order to meet budget and schedule...
  3. ...potentially to avoid the collateral damage that would be inflicted in the case of an overt failure
  4. Inflicting collateral damage through practices such as continuous overtime in order to ensure value or completing on target
  5. Accepting reduced value or increased budget/schedule in order to avoid the collateral damage of the political fallout for not using a favored technology or process

Ultimately some of these trades are inevitable. Personally, I strongly prefer focusing on value. Do what it takes while the value proposition remains solid and the project doesn't resemble a money pit. Kill it when the value proposition disappears or it clearly becomes an infinite resource sink. But of course I know this is rather naive, and sometimes preventing political fallout, which I tend to willfully ignore in my quest for truth, justice, and working technology, has an unacceptable if intangible impact. One of my most distinct memories is having a long conversation with a middle manager about a runaway project, and at the end being told something like "Erik, I agree with you. I know you're right. We're wasting time and money. But the money isn't worth the political mess it would create if I cut it off or forcefully reorganized the project." I appreciated the honesty, but it completely floored me, because it meant not only that politics could trump money, but that politics could trump my time, which was being wasted. Now I see the wisdom in it, and simply try to avoid such traps when I see them and escape them as quickly as possible when I trip over them.


Sunday, August 01, 2010

Is the Market for Technical Workers a Market for Lemons?

I saw this article about Amazon Mechanical Turk this morning, and it struck me that the market for technical talent, especially software engineering and IT talent, is a market for lemons. For a more critical (and hopeful!) look at the idea, take a look at this post at the Mises Institute.

What's a Lemon Market?

The basic idea behind a market for lemons is that the seller has significantly more information about the quality of a product than the buyer (i.e. an information asymmetry). The consequence of this is that buyers are unwilling to pay more than the perceived average value of the goods on the market, because the buyer does not know whether he is going to get a cherry (a high quality good) or a lemon (a low quality good). This then creates disincentives to sell cherries, because buyers will not pay their market value, and incentives to sell lemons, because they can be sold for more than they are worth. This creates a sort of vicious cycle where high quality goods are withdrawn from the market, thus lowering the average quality of the available goods and consequently the prices buyers are willing to pay. The ultimate result is a race to the bottom in terms of prices, with high quality goods vanishing as collateral damage.
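
Here's a toy model of that cycle, a sketch of my own with deliberately crude assumptions: sellers know the true quality, buyers bid the average of whatever is still on offer, and anyone holding a good worth more than the bid withdraws it.

    object LemonMarket {
      def main(args: Array[String]): Unit = {
        // Ten goods ranging from lemons (10) to cherries (100); only sellers know which is which.
        var goods = (1 to 10).map(_ * 10.0)
        var round = 1
        var shrinking = true

        while (shrinking && goods.nonEmpty) {
          val bid = goods.sum / goods.size       // buyers offer the average perceived value
          println(f"round $round: ${goods.size}%d goods offered, buyers bid $bid%.1f")
          val remaining = goods.filter(_ <= bid) // cherries worth more than the bid are withdrawn
          shrinking = remaining.size < goods.size
          goods = remaining
          round += 1
        }
        // The market unravels round by round until only the lowest-quality goods are left trading.
      }
    }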

There are five criteria for a lemon market (paraphrased and reordered from Wikipedia):

  1. Asymmetry of information such that buyers cannot accurately assess the quality of goods and sellers can.
  2. There is an incentive for sellers to pass off low quality goods as high quality goods.
  3. There is either a wide continuum in the quality of goods being sold, or the average quality is sufficiently low such that buyers are pessimistic about the quality of goods available.
  4. Sellers have no credible means of disclosing the quality of the goods they offer.
  5. There is a deficiency of available public quality assurance.

The Lemon Market for Technical Talent

The market for technical talent is similar. The information available about most job candidates for technical positions is superficial at best, and completely inaccurate at worst. Furthermore, even when information is available, the buyers often do not have the knowledge required to evaluate it. This even extends to existing, long-term employees and contractors. Finally, the quality of an employee is often highly contextual - environmental and cultural factors can significantly boost or dampen a person's performance, and those same factors may have the opposite effect on another person. So let's run down the different factors:

  1. Information Asymmetry: Resumes are short, shallow, and often deceptive. Technical skills are very difficult to assess during an interview, and cultural fit can be almost impossible to assess.
  2. Incentives for sellers to deceive buyers: Resumes are deceptive for a reason. It's often stated that you need to pad your resume, because the person looking at it is automatically going to assume it is padded. Furthermore, for almost two decades now technology has been a source of high paying jobs that can be obtained with marginal effort (just turn on the radio and listen for advertisements for schools offering technology training).
  3. Continuum in the quality of goods: This exists in all professions.
  4. Sellers have no credible means of disclosing quality: This is largely but not entirely true. Most paid work that technology professionals do cannot be disclosed in detail, and even when it is, there is reason for buyers to doubt its credibility. Employment agreements may also place restrictions on the disclosure of closely related work, even if it is done on one's own time with one's own resources.
  5. Deficiency of public quality assurance: Employment laws and the potential for litigation (at least here in the U.S.) make the disclosure of employee or even outsourcer performance very risky. Individual workers cannot effectively provide guarantees, and those provided by outsourcers are generally meaningless.

What all this really amounts to is: sellers have no good way of providing information about their high quality products, and buyers have no good way of obtaining reliable information about a seller's products. The problem centers entirely on making information available and ensuring its accuracy.

What Technology Professionals can do

Technology Professionals need to build up public reputations. We need to make samples of our work, or at least proxies of samples, publicly available. We need to build more professional communities with broader participation and more local, face-to-face engagement.

I think broader participation is the key. If you look at other professions (yes, I know, I'm lumping all "technology professionals" together and I frequently rant against such a lumping), members are much more active in various groups and associations. In many it's expected and even explicitly required by employers. Sure, there are tons of groups on the internet. There are numerous, and often times enormous technology conferences. There are countless technology blogs and community websites. But I think the conferences are generally too big to be meaningful because most attendees essentially end up being passive receptacles for sales pitches (that's the purpose of these conferences) and maybe they do a little networking and learning on the side. I think even the most active community sites really represent very small slices of the professionals they are intended to serve. Most professionals are passive participants in them at best, just like with conferences.

But there are millions of engineers, software developers, and IT professionals out there. Millions! How many of them do you find actively participating in professional communities of any kind? Not many when you really think about it. This is a problem, because as long as the vast majority of technology professionals have no public, professional identity, the vast majority of employers aren't going to see professional reputation as a useful search criterion or even measure when considering candidates.

What Employers can do

Employers can do one of two things:

  1. Expect applicants for new positions to provide meaningful evidence of their skills and take the time to evaluate such evidence
  2. Just outsource everything to the cheapest company possible. Information on quality is largely unavailable and unreliable, but information on price is readily available and relatively accurate.

You can see which one of those strategies is dominating the marketplace. One requires effort and involves going against the tide. The other is easy (at least to start, it's not easy to make work), and involves running along with the herd.

Except in the limited number of cases where employers, and the hiring managers who work for them, genuinely believe they can achieve a competitive advantage by actively seeking out candidates with demonstrated qualifications (as opposed to a padded resume and fast talk during an interview), I don't think employers are going to seriously consider reputation and objective evidence of skills until such information is easily obtainable and fairly reliable.

Is there any hope?

Yes!

We live and work in an information economy where new forms of information become available every day. There is no reason to believe that just because today employers mostly hire on faith and expect luck to carry them through, in the future there won't be much more information available. I'm sure there are several companies working on aggregating such information right now. The market is huge. While companies will rarely put much effort into obtaining information and aggregating it into a useful form, they will often pay large quantities of money for it. The key is to make sure the information is there for them to use.

Also, if your resume makes it through all the wickets to a real hiring manager and you provide an easy way for him to find more good information about you, he'll probably use it. Interviewing people is a lot of work. Deciding among candidates can be very hard. Extra information will almost certainly be used. It just has to be easy to obtain. People are lazy.

But what about old fashioned networking?

I think the numbers involved are too large, and relying on old-fashioned networks tends to yield poor results. Recommendations and referrals are certainly useful and lead to many, many hires. But they tend to be based more on personal relationships than on real qualifications, and therefore need to be checked. Schmoozing isn't a real technical skill. That being said, it could quite likely get you a job. It's just that in general I don't want to work with such people in a technical capacity, so I'm not going to recommend people go do it in order to obtain technical jobs. I'm selfish that way.


Saturday, January 16, 2010

Changing Tastes

When I was a kid, I hated onions, green peppers, and mushrooms. I used to tell people I was allergic to mushrooms so they wouldn't try to make me eat them. I hated any sort of chunky sauce or really textured meat. I think I wanted everything to have either the consistency of a chicken nugget or ketchup. My parents used to tell me that when I was older my tastes would change. That I liked crappy food, disliked good food, and eventually I would realize it. They were right.

So kids like chicken nuggets and ketchup. Wow, huge revelation. What does this have to do with technology? I'm on the steering committee for an internal conference on software engineering that my employer is holding. I'm the most junior person on the committee, and most of the members are managers who have more managers reporting to them. Our technical program committee (a separate group with more technical people on it, all very senior) just finished abstract selection, and we've been discussing topics and candidates for a panel discussion. During this process I have been absolutely shocked by how my tastes differ from those of my colleagues.

I've noticed that among the selected presentations, topics on process, architecture, and management are over-represented. On the other side, many of the abstracts that I thought were very good and deeply technical fell below the line. I can't quite say there was a bias towards "high level" topics, because I think they were over-represented in the submissions. Given the diversity of technology that's almost inevitable. A guy doing embedded signal processing and a guy doing enterprise systems would most likely submit very different technical abstracts, but ones on management or process could be identical. It's almost inevitable that topics that are common across specialties will have more submissions.

There's a similar story with the panel discussion. I wanted a narrower technical topic, preferably one that is a little controversial so panelists and maybe even the audience can engage in debate. My colleagues were more concerned with who is going to be on the panel than what they would talk about, and keeping the topic broad enough to give the panelists freedom.

What's clear is that my colleagues have different tastes in presentation content than I do. I think they are genuinely trying to compose the best conference they can, and using their own experiences and preferences as a guide. I think their choices have been reasonable and well intentioned. I just disagree with many of them. If I had been the TPC chair, I would have explicitly biased the selection criteria towards deeper, technical topics. Those are the topics I would attend, even if they are outside my area of expertise. I would use my preferences as a guide. But that leaves me wondering, in another five or ten years are my tastes going to change? My tastes have certainly changed over the past decade, so I have no reason to believe they won't change over the next. Will I prefer process over technology and architecture over implementation? Will I stop thinking "show me the code!" and "show me the scalability benchmarks!" when I see a bunch of boxes with lines between them? I don't think so, but only time will tell. When I was a kid, I would have never believed that I would ever willingly eat raw fish, much less enjoy it, but today I do.



Tuesday, April 21, 2009

McKinsey and Cloud Computing

McKinsey has created a tempest in a teapot by denouncing the economics of both in-the-cloud offerings such as Amazon EC2 and behind-the-firewall clouds for large enterprises.  At a high level I think their analysis is actually pretty good, but the conclusions are misleading due to a semantic twist.  They use Amazon EC2 as a model, and their conclusions go something like this:

  1. Amazon EC2 virtual CPU cycles are more expensive than real, in-house CPU cycles
  2. You waste 90% of those in-house CPU cycles
  3. You'll waste almost as many of those virtual cloud CPU cycles, only they cost more, so they are a bad deal
  4. You stand a decent shot at saving some of those real CPU cycles through virtualization, so you should aggressively virtualize your datacenter
  5. You're too inept to deliver a flexible cloud behind-the-firewall, so don't even try

I'll let you ponder which of the above statements is misleading while I address some related topics.

The goals of cloud computing are as old as computing itself.  They are:

  1. Reduce the time it takes to deploy a new application
  2. Reduce the marginal cost of deploying a new application over "standard" methods
  3. Reduce the marginal increase to recurring costs caused by deploying a new application over "standard" methods

Back in the days of yore, when programmers were real men, the solution to this was time sharing.  Computers were expensive and therefore should be run at as high a utilization as possible.  While making people stand in line and wait to run their batch jobs was a pleasing ego trip for the data center operators, the machines still wasted CPU time while performing slow I/O operations, and waiting in line generally made users unhappy.  Thus time sharing was born, and with it, in a quite real sense, the first cloud computing environments, because in many cases a large institution would purchase and host the infrastructure and then lease it out to smaller institutions or individuals.

The problem here is that the marginal cost equations end up looking like a stair-step function.  If you had a new application, and your enterprise / institution had excess mainframe capacity, then the marginal cost of letting you run your application was near zero.  But if there was no spare capacity - meaning the mainframe was being efficiently utilized - then the marginal cost was high because either someone else had to be booted off or you needed an additional mainframe.
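
A sketch of that stair-step, with capacity and cost figures invented for illustration:

    object StairStepCost {
      // Marginal cost of adding one more application when compute capacity
      // comes in lumps of, say, ten applications per machine costing 500 units.
      def marginalCost(existingApps: Int, perMachine: Int = 10, machineCost: Double = 500.0): Double =
        if (existingApps % perMachine == 0) machineCost // no spare capacity: buy another machine
        else 0.0                                        // spare capacity: effectively free

      def main(args: Array[String]): Unit =
        (0 until 25).foreach(n => println(s"app #${n + 1}: marginal cost = ${marginalCost(n)}"))
    }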

Now fast-forward a couple decades to the PC revolution.  Somewhere along the way the cost curves for computers and people crossed, so it became appropriate to let the computer sit idle waiting for input from a user rather than having a user sit idle while waiting for a computer.  Now you could have lots of computers with lots of applications running on each one (although initially it was one application at a time, but still, the computer could run any number of them).  This smoothed out the non-recurring marginal cost curve, but as PCs proliferated it drove up recurring costs through sheer volume.

Unfortunately this had problems.  Many applications didn't work well without centralized backends, and some users still needed more compute power than could be reasonably mustered on the desktop.  So the new PCs were connected to mainframes, minicomputers, and eventually servers.  Thus client-server computing was born, along with increasingly confusing IT economics.  PCs were cheap, and constantly becoming cheaper, but backend hardware remained expensive.  The marginal non-recurring cost became completely dependent on the nature of the application, and recurring costs simply began to climb with no end in sight.

Now fast forward a little more.  Microsoft releases a "server" operating system that runs on souped-up PCs and convinces a whole bunch of bean counters that they can solve their remaining marginal non-recurring cost problems with Wintel servers that don't cost much more than PCs.  No more expensive servers.  No more having to divide the cost of a single piece of hardware across several projects.  Now if you want to add an application you can just add an inexpensive new Wintel server.  By this time the recurring cost equation had already become a jumbled mess, and the number of servers was still dwarfed by the PC on every desk, so there was no tying back the ever increasing recurring costs.  This problem was then further exacerbated by Linux giving the Unix holdouts access to the same cheap hardware.

Thus began the era of one or more physical servers per application, which is where we are today, with McKinsey's suggestion for addressing it: virtualization behind the firewall.  The problem with this suggestion is that, for a large enterprise, it isn't really that different from the in-the-cloud solution that they denounce as uneconomical.  One way is outsourcing a virtualized infrastructure to Amazon or similar, and the other is outsourcing it to their existing IT provider (ok, not all large enterprises outsource their IT, but a whole lot do).

Virtualization, in the cloud or otherwise, isn't the solution because it doesn't address the root cause of the problem - proliferation of (virtual) servers and the various pieces of infrastructure software that run on them, such as web servers and databases.  Hardware is cheap.  Software is often expensive.  System administrators are always expensive.  Virtualization attacks the most minor portion of the equation.

Virtualization is the right concept applied to the wrong level of the application stack.  Applications need to be protected from one another, but if they are built in anything resembling a reasonable way (that's a big caveat, because many aren't) then they don't need the full protections of running in a separate OS instance.  There's even a long standing commercially viable market for such a thing: shared web hosting.

It may not be very enterprisey, but shared web site/application hosting can easily be had for about $5 per month.  The cost quickly goes up as you add capabilities, but still - companies are making money by charging arbitrary people $5 per month to let them run arbitrary code on servers shared by countless other customers running arbitrary code.  How many enterprise IT organizations can offer a similar service at even an order-of-magnitude greater cost?

Not many, if any.  Yet do we see suggestions pointing out that Apache, IIS, Oracle, SQL Server, and countless other pieces of infrastructure can relatively easily be configured to let several applications share compute resources and expensive software licenses?  Nope.  They suggest you take your current mess, and virtualize it behind the firewall instead of virtualizing it outside the firewall.


Monday, June 23, 2008

Google AppEngine

There's a lot of buzz out there about how Google AppEngine is a game changer. My question is: Which game? AppEngine reduces the cost of hosting a web application down to zero. Zero is a pretty profound number. You don't even need to provide a credit card number in case they might want to charge you some day. All you need is a mobile phone number capable of receiving text messages. This means that not only is there no up-front cost, but also that there is no risk that you will suddenly incur huge fees for exceeding transfer quotas when you are lucky enough to be Slashdotted. Your application will just be temporarily unavailable. Hey, if that service level is good enough for Twitter, it's good enough for you, right?

The Open Source Game

Starting an open source project has been free for years. Many project hosting communities exist, providing projects with source code management, issue tracking, wikis, mailing lists, other features, and even a little visibility within their directories. Web hosting is not exactly expensive, especially for light-weight scripting languages like PHP, Perl, and Python, but it still costs something and therefore requires a sponsor. Oftentimes for a fledgling project the sponsor is also the founder, who also happens to be the lead programmer, help desk, and promoter. There are some who do an amazing job of all this, but for the most part I think project founders choose to wait.

Note: If I am wrong about the availability of good, free hosting for open source web applications, please provide links to them in the comments section. I would love to be proven wrong on this.

The Startup Game

If Paul Graham were to comment on AppEngine, it would probably be that it eliminates one of the last remaining excuses anyone may have for not launching a web startup. If you have an idea, some time, and can hack Python – you can launch a web application with AppEngine. You don't need money, infrastructure, long-term commitment, or any of those other things that may scare a person away from a startup. With AdSense, you can even monetize your application with hardly any effort.

Of course there are limitations. For one thing, AppEngine is very limited in what it can do. Pretty much any idea with even moderate bandwidth, storage, or computation requirements that cannot be delegated to another web application (e.g. embedding YouTube functionality) is out. So, unless you plan on building a business based on mashing together existing applications, then AppEngine probably is not your free lunch. That being said, I fully expect that Google will gradually add APIs for all of its applications to AppEngine, thereby providing a very powerful base for new-but-thin ideas.

The R&D Game

Just for a moment, let's say there was a company that required all of its employees to spend 20% of their time working on a project other than their primary job. This is a company that is betting that it cannot predict which idea is going to be the next big thing, so it must try lots and lots of things and see what sticks. As this company grows, providing infrastructure for those projects would become a real challenge. Especially providing infrastructure that allows them to be testing in the wild as opposed to simply internally, and allows the projects to instantly scale if they just happen to turn out to be a success. This company would also want to ensure their employees weren't slowed down by having to deal with muck. Such a company would need an infrastructure that could scale to support many, many applications without burdening infrastructure teams. Such a company would need AppEngine.

Now extend the idea further. Let's say it doesn't really matter whether the next big thing was developed by an employee or not. What matters is that the next big idea is bound to the company, for example by using the company standard authentication mechanism or, more importantly, the company standard monetization mechanism.

Ok, so we all know what company I'm talking about. AppEngine allows Google to outsource some of their R&D at very low cost, and given that most successful AppEngine developers will probably choose AdSense to monetize their creations, Google stands to profit regardless of whether they ever pay any fees or not. In cases where creators do pay hosting fees, the great Google gets paid twice.

The Recruitment Game

Distinguishing among potential recruits is very challenging. The accomplished academic is almost certainly smart, but may fall apart when asked to work with a significant team on something important to the company rather than a research interest. The industry veteran may have great corporate experience, but political skills could be masking shallow or outdated technical skills. The bottom line is recruiting is hard because in most cases you never see a direct sample of an individual's work. At best you can see what a team he was on produced and take an educated guess as to his contribution. Open source projects can provide more information, but for most programmers there is no real motivation to participate in such projects.

AppEngine provides more motivation to the programmer, because he can more readily show his creation to other people without incurring any cost and there is always a chance that he will make some money. There are probably a lot of talented “almost founders” out there who would start a company, but perhaps lack some of the personality traits necessary to do so or found themselves in a situation where they need a steady income stream that simply isn't available for the founder of an early-stage startup. These individuals, and others, will make great pickings for Google.

Conclusion

Long term, in order for Google to grow, it has to attract more people to spend more time on sites displaying AdSense advertisements. Over the past few years Google has come out with countless new online services, most of which are still in beta, and none of which has yielded anything close to the monetization potential of their search business. AppEngine allows them to vicariously service the long tail without continuously expanding their already highly diverse R&D efforts. As a nice added bonus it will provide the opportunity to acquire future high-value startups before they are even real startups, by simply hiring the developers and maybe tossing some stock options their way. On the flip side, I don't think AppEngine is going to have much effect on mainstream web application hosting. The API is too constrained, and for a company with resources the ties to Google are most likely too tight. So I predict AppEngine will be more of a muted success for Google. The infrastructure built for it will be useful internally, it will help them get more sites up more quickly addressing eclectic needs without burdening employees, and it will provide a proving ground for future hires and possibly very early stage acquisitions. This could all add up to AppEngine being a very significant success, but it also means the success may be out of the public eye.


Friday, February 08, 2008

Linguistic Success

There's a new favorite pastime on the fringes of the Scala community. That pastime is blogging about aspects of the Scala language that will prevent it from being a "success" unless they are quickly addressed. "Success" is roughly defined as "widespread commercial use." A good metric might be: "How hard is it to find a job programming Scala?" The premise of the criticism usually revolves around one or more complicated, terse, or "foreign" (relative to the author) constructs that are common in Scala, or at least favorites among frequent posters, and how these constructs will prevent "average" programmers from understanding Scala and thereby prevent commercial adoption. A recent favorite is symbolic function names (/: and :\ for folds).
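
For anyone who hasn't run into them, the constructs in question look roughly like this (Scala 2 syntax; the symbolic aliases were deprecated in later versions):

    object FoldNames {
      def main(args: Array[String]): Unit = {
        val xs = List(1, 2, 3, 4)

        // The symbolic spellings that draw the criticism:
        val a = (0 /: xs)(_ + _)  // fold left, starting from 0
        val b = (xs :\ 0)(_ + _)  // fold right, starting from 0

        // The spelled-out equivalents most newcomers find more approachable:
        val c = xs.foldLeft(0)(_ + _)
        val d = xs.foldRight(0)(_ + _)

        println(List(a, b, c, d)) // List(10, 10, 10, 10)
      }
    }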

The logic of this argument seems relatively sound. When choosing a technology for a project, it is very important to consider the availability of potential employees who know that technology. Forcing every new member to scale a potentially steep learning curve is a frightening prospect. I can imagine development managers lying awake at night fearing that their expert in some obscure technology will leave, and it will take months to replace him. It's a legitimate, although I think slightly exaggerated, fear.

That being said, I think it has little to do with the adoption of a new language. The majority of programmers simply do not spend their free time learning new languages. Many, maybe most, won't even use their free time to learn languages that they know have ready pre-existing demand. They learn when they are paid to learn, or when failing to learn will lead to immediate negative consequences for their career. I think the same can be said of most professions. Most people work because they have to, not because they want to.

Consequently, I think expanding beyond a core of enthusiasts is very difficult, if not impossible, simply by attracting people to the language. Right now some leading-edge Java people are taking a look at Scala, because they think it might be the next big thing in Java-land. These people are different than enthusiasts. Enthusiasts will learn a language for the sake of learning it. The leading-edge folks learn it as a high risk investment. If they can get a head start on the next big thing, it will be great for their careers (and businesses). These people constantly think "can I sell using this technology?" and "if I do sell it, will it come back to haunt me?" This is a very pragmatic perspective, and it is the perspective I take when I'm at work.

Confusing, odd-ball language features make the sell a lot harder. Pitching them as features increases personal risk.

But it doesn't matter.

Why? Because the vast majority of developers are not going to learn a new language because they want to, they are going to learn it because they have to. Not to mention that there are countless languages out there, so going after the enthusiasts and leading-edgers (who are mostly lookers anyway) is just fishing in an already over-fished pond.

So how does a language become a success?

Enough people use it for real projects. Let's say a consultant rapidly prototypes an application using the technology, and that application makes it into production. Now maintenance programmers have to learn that technology. Sure, it's complicated, but unlike the guy learning it on free weekends, the maintenance programmers have all-day every-day. It's hard to learn complex concepts an hour or two at a time, but these guys have all day, and the next day. It's hard to memorize something by looking at it a couple hours a week, but spend all day staring at it and it will click. Not to mention that their livelihoods depend on it. Any sort of "cowboy" development team can cause this to happen, and frankly such teams are pretty common.

So maybe one-in-five maintenance programmers actually like the technology, and admire the cowboys, so when they get a chance to do new development, they use it, too.

The same thing can happen with products from startups. Let's say a startup builds a piece of enterprise software using Scala. They sell it to big, conservative companies by emphasizing the Java aspect. They sell customization services, too. And then it's back to the maintenance programmer, who has no choice.

Notice a pattern? The key to language success is making it powerful enough for a couple cowboys to do the work of an entire team in a shorter period of time. Selling fast and cheap is easy. If you have enough fast and cheap, the business people won't care if you are making it out of bubble-gum and duct-tape, because you are giving them what they want.

The key to success is making the reward justify the risk. Judging by what some people using Scala for real-world projects are saying, and my one hands-on experience, I think Scala offers it. It's just a matter of time before it sneaks its way into enterprises, just like Ruby has.


Wednesday, January 02, 2008

Open Source, Cost, and Enterprise Product Adoption

This is a relatively common topic, and today it was raised on TSS as a result of a blog post by Geva Perry:

Are developers and architects becoming more influential in infrastructure software purchase decisions in large organizations?

...with an implied "as a result of open source software." It's an interesting question, and I think it can be generalized to: What effect do license costs have on the acquisition of enterprise software?

In considering this question, it is important to remember that:

  1. In large organizations, money often comes in many colors, such as: expense, capital, and internal labor
  2. The level of authority an individual has depends on both the amount and the color of the money involved
  3. Certain colors of money are easier to obtain than others, and sometimes it varies dependent on the amount
  4. Accounting rules, both standard and self imposed, affect what can and cannot be purchased with a given color of money

In a nutshell, the budgeting process for large organizations can be extremely idiosyncratic. Not only do the official processes vary, but individual personalities and budget cycles can have profound effects.

So the first effect the cost of a piece of enterprise software has is to limit the various buckets of money that can be used to pay for it. However, this can be a very complex process. Let's assume any given piece of software typically has the following types of cost:

  1. One-time license cost (both for the application and support infrastructure, such as the OS and DBMS)
  2. Recurring maintenance and support cost
  3. Hardware cost (e.g. a server)
  4. Internal labor for development and deployment
  5. External labor for development and deployment

The lower the costs involved, the less approval is required. Driving down license costs pushes the initial acquisition decision closer to the users and/or developers. This is a big win for open source applications. It's probably a bigger win for application vendors. For example, most enterprise applications require a DBMS, such as Oracle. Oracle is not famous for being cheap. So let's say your potential customer can readily obtain $100k to spend on software licenses. If you are a software application company, do you want that money as revenue, or do you want 2/3 of it to go to Oracle and IBM?

I'll give you a hint. You want a department to be able to deploy your software without cutting a check to the big boys, but you want to be able to say "Yes, we support your enterprise standards" to the big-wigs in the IT department who think that if there isn't a major conference for a piece of software, then it shouldn't be on the network. That way your product can be approved, and then running it on Oracle can be deferred until "usage levels warrant it."

Hardware costs are even more interesting. At my employer, equipment that costs $5k or more is "capital," and less than that is expense. Capital is generally easier to obtain if (1) you know you need it a year in advance, or (2) it's the end of the year and there's still money lying around. It is impossible to obtain at the beginning of the year, when everyone thinks that they will actually spend their allocation, unless of course it was approved last year. Conversely, expense money is much more plentiful at the beginning of the year, when managers are still dreaming of sticking to their budgets, and becomes more and more difficult to obtain as reality sets in. So what's the point? Well, you want your product to require a small enough amount of hardware so that a first or second line manager can purchase it on expense without obtaining approval, but also have a recommended configuration that costs enough to be purchased on capital.

This is interesting because big-iron Unix guys will often lament about how their systems have such a lower TCO than x86 Wintel or Lintel systems, so all the arguments about x86 systems being "cheaper" are bunk. What they are ignoring is that it is much easier to spend $5k on twenty small things (plus setup labor on each of those twenty items) than it is to spend $50k on one big item, because the $50k either has to be approved a year in advance or it has to wait until someone else's project falls apart so they can't spend the $50k. The "total" in TCO is completely ignored, because very few people think about the few hours that each of those servers requires to set up.
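
The arithmetic, roughly (every figure here is invented for illustration, not taken from any real budget):

    object HiddenTco {
      def main(args: Array[String]): Unit = {
        // One big box: a single $50k line item that needs capital approval a year in advance.
        val bigIron = 50000.0

        // Twenty small boxes at $2,500 each: every purchase slips under the $5k expense threshold...
        val smallHardware = 20 * 2500.0
        // ...but each one also needs a few hours of setup at a burdened labor rate.
        val setupLabor = 20 * 4 * 100.0 // 20 servers x 4 hours x $100/hour

        println(s"big iron, one approval:      $$$bigIron")
        println(s"small servers, no approvals: $$${smallHardware + setupLabor}")
        // The small-server route costs more in total, but no single purchase is big
        // enough to trigger the capital process -- which is exactly the appeal.
      }
    }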

Now, about labor costs. Managers generally have at least some discretion over how their direct reports spend their time. If you actually think about it in terms of the fully burdened hourly cost of an employee, managers often have significantly more "budget" under their control through their ability to direct how time is spent than they do for purchasing licenses and services. Again, this is a big win for open source.

The bottom line is that the best way to get your foot in the door is to have the lowest marginal cost of deployment as possible. I'll offer as evidence the countless wikis that have popped up around my workplace, very few of which are even officially approved.

Of course, this makes you wonder why the marginal cost of deploying a piece of enterprise software tends to be so high. Why aren't more vendors practically giving their software away for small deployments? Well, many do, such as SQL Server Express and Oracle XE. But there are still more that don't. The problem is that it's really hard to get the total cost of initial deployment down below the point where the bureaucracy kicks in, and once it kicks in it helps to be more expensive.

Yes, that's right, I said more expensive.

You see, these days one of the great ways to make your career in IT is to be a good negotiator. The combination of off-the-shelf software and outsourcing has shifted IT expenses from being dominated by internal labor to being dominated by procurement contracts. However, you don't build an illustrious career by negotiating $10k contracts. Likewise, once you pass a relatively small threshold someone decides that the project needs a "real" project manager, instead of just an "interested" manager or a technical lead, and project managers are unfortunately measured more by the size of their projects than by the value that they deliver. (Yes, that's right, screwing up a multi-million dollar ERP implementation is better for your career than successfully deploying some departmental application under budget and on schedule.)

In other words, once the signatures of additional people are required, you have to have something big enough for those people to consider it worth their time. So, if you are a software vendor, or an internal employee planning a deployment project, then you either need to go really small and viral or really big. Medium size projects are simply too hard to push through.

And, in my opinion, that's really a shame, because medium sized projects tend to have the best value proposition. Small ones involve too little of the organization to have a significant impact, and large ones become too unwieldy to meet objectives. Projects need to be large enough to do things right in terms of technology and engage future users, but small enough to have readily apparent benefits and incremental deliveries that provide real value (not simply "look, it's a login screen!").

Maybe it's different in other organizations, but somehow I doubt it. However, I'd be really interested in knowing what others' experiences are.


Monday, September 17, 2007

The Tar Pit

The editor of TSS has decided to run a series discussing The Mythical Man Month, by Frederick Brooks. Hopefully it will produce some good discussion. There are a lot of Agile advocates that hang out on TSS who really could stand to (re)learn some of the lessons of software development. I make a point of rereading it every few years lest I forget the lessons learned by computing's pioneers. The first chapter - The Tar Pit - contains one of my favorite concepts, as illustrated by the graphic below (slightly changed from the original).

Programs

Let me explain. A program is what we all have developed. It's a simple piece of software that is useful to the programmer and/or to some set of users who are directly involved in defining its requirements. Most bespoke departmental applications and a substantial portion of enterprise applications fall into this category. They are sufficiently tested and documented to be useful within their originating context, but once that context is left their usefulness breaks down quickly. In addition, they are not solidly designed to be extensible, and certainly not to be used as a component in a larger system. Obviously this is a range, and I've really described a fairly well developed program - one almost bordering on a programming product. That script you wrote yesterday to scan the system log for interesting events, which has to be run from your home directory using your user account in order to work, is also just a program.

Programming Products

Moving up the graph, we hit the programming product. In theory, all commercial applications and mature bespoke applications are programming products. In practice this isn't really the case - but we'll pretend, because they are supposed to be and I increased the standard over what Brooks originally described. The big challenge with programming products is that, according to Brooks, they cost three times as much to develop as simple, fully-debugged programs, yet they contain the same amount of functionality. This is why it's so hard to get sufficient budget and schedule to do a project right. The difference between a solid piece of software and something just cobbled together is very subtle (you can't tell in a demo), yet the cost difference is quite astounding. Consequently, I think most commercial applications are released well before they hit this stage, and bespoke ones require years to mature or a highly disciplined development process to reach this point.

Programming Systems

Programming systems are programs intended to be reused as parts of larger systems. In modern terms, they are libraries, frameworks, middleware, and other such components that are all the rage in software development. Like programming products, programming systems are thoroughly tested, documented, and most importantly are useful outside of the context in which they were created. And, like programming products, according to Brooks they take three times as long to develop as a regular program. Developing programming systems for bespoke applications or niche use can be a tar pit all its own. For one, many programmers like building libraries and frameworks. The problems are more technically challenging, and there is no strange-minded user to consider. The programmer and his colleagues are the user. Programming systems are relatively common in groups that execute a lot of similar projects and/or that contain programmers who really want to build components.
Programming System Products

Amazingly, programming system products are relatively common - even if there really aren't that many of them. As you've probably guessed, a programming system product has all the traits of both a programming product and a programming system. It is useful to a wide range of users and can be effectively extended and/or embedded for the creation of larger systems. It has complete documentation and is extensively tested. Where are these wondrous things? Well, you are using one right now (unless you printed this). Your operating system is one. It both provides useful user-level functions and a huge amount of infrastructure for creating other programs. MS Office is one as well, because it has a pretty extensive API. Most commercially developed enterprise systems should be programming system products, because:

  1. They provide off-the-shelf functionality for regular users
  2. Customers always customize them
  3. They often must be integrated with other products
  4. Simply integrating their own components would go better with a programming system
The problem is that they are not, because of:

The Tar Pit

Brooks didn't explicitly write this definition of The Tar Pit, but I think he would agree. Put yourself in the position of a development manager at a startup or in a larger company about to launch a new product. On one hand, you want to make the product as good as possible. You know that what you develop today will serve as the base for the company/product line for years to come. It needs to be useful. It needs to be extendable. It needs to be thoroughly tested and documented... It needs to be cheap and delivered yesterday. The differences between a programming system product and a simple programming product are far more subtle than the differences between a program and a programming product. But the programming system product costs a full NINE TIMES as much to develop as the program (three times for being a product, times three for being a system) with essentially the same "outward functionality" - at least if you are a sales guy or a potential customer sitting in a demo. I think this is the struggle of all engineering teams. If the product is long lived, doing it right will pay major dividends down the line. But it can't be long lived if it is never released. It stands a worse chance if it comes out after the market is flooded with similar products (actually, that's debatable...). The ultimate result is a mish-mash of tightly coupled components that, as individuals, fall into one of the lesser categories but as a whole fall down. There is a documented API, but the documentation isn't really that good and the application code bypasses it all the time. The user documentation is out-of-date. Oh, and the application isn't really that general - hence why all the major customers need the API, so they can actually make it work.

Escaping the Tar Pit

Ok, so if you develop a large system you are destined to fall into the tar pit, because cheap-and-now (well, over budget and past schedule) will override right-for-the-next-decade. You need a programming system product, but budget and schedule will never support much more than a program. So how do you escape it?

Accept It

Products that give the outward impression of being far more than they are often sorely lacking in conceptual integrity. If you are building an application - build it right for the users. Remember you can build it three times for the cost of building a programming system product. Maybe by the third time there will be budget and schedule for it. Or maybe, just maybe, you can evolve your current system. But pretending will just make a mess while wasting significant amounts of time and money.

Partition It

Some pieces of your system are probably more important than others. There are places where requirements will be volatile or highly diverse among customers - those are the places where you need a truly extensible system. You should also be able to reuse strong abstractions that run through your system. The code associated with those abstractions should be top-notch and well documented. Other pieces just need to be great for the users, while the few that remain need to be convenient for yourself or administrators to extend.

Open Source It

Find yourself developing yet another web framework because what exists just isn't right? Open source it. This isn't really my idea. It's what David Pollak of CircleShare is doing with the lift web framework for Scala (actually, I'm guessing at David's reasoning, I could be wrong). The infrastructure for your application is essential, but it isn't your application. It is what Jeff Bezos refers to as muck.
You have to get it right, but it's distracting you from your core mission. So why not recruit others with similar needs to help you for free? That way you don't have to completely give up control, but you also don't have to do it alone. Theoretically the same could be done for applications. Many large customers of software companies have significant software development expertise - sometimes more than the software companies themselves. I think it would be entirely feasible for a consortium of such companies to develop applications that would serve them better than commercial alternatives. But I have yet to convince anyone of that...


Tuesday, August 14, 2007

Business Engagement and IT Project Failure

CIO.com recently ran an article by the CIO of GE Fanuc regarding "functional engagement" (i.e. "the business" or more commonly "the users") and project failure. Michael Krigsman later posted a more succinct summary on his ZDNet blog. Here's an even shorter version:

  1. Make sure a single business-side person has significant capability, responsibility, and authority for project success.
  2. Don't short-circuit important parts of the project.
  3. Make sure that person understands the "laws of IT" and defends them to his peers, subordinates, and superiors.

#1 is just plain common sense. #3 amounts to establishing a scapegoat for the inevitable failure caused by short-circuiting appropriate systems engineering or architectural activities. Notice that there is neither a definition of "failure" nor a definition of "success." I think it can be inferred that he's defining "failure" as "exceeding project or maintenance budget and/or schedule." Not once does he mention delivering real innovation to the business, or even real value - just avoiding causing massive amounts of pain. Consider the following statement:
Enterprise platforms like Siebel, Oracle and SAP are not intended to be heavily customized. When going from a niche, custom application to Siebel, you need a strong functional leader to push back on every “Yeah, but” statement. They must start at zero and make the business justify every customization. Saying “no” to customization is bitter medicine for your business partners. They will make contorted faces and whine ad nauseum. But it is for their own good.

Let's think for a moment. Your customer (internal or external) wants to spend millions of dollars implementing a CRM (or ERP, PLM, etc.) package. Earlier, it went to the trouble of building one from scratch. That probably means it considers CRM to be an extremely important part of its business, and it expects to derive a competitive advantage from having a shiny new system. The best way to be competitive is to copy what everyone else is doing, right? Also, repeatedly receiving "no" as an answer will really make your customer want to take an active role in the implementation, right? Hmmm....somehow I don't think so. Introducing a strong impedance mismatch between the organization and the software it uses is failure. Standing in the way of innovation is failure. Mr. Durbin wants you to fail.

There is actually a simple, albeit occasionally painful, solution to this problem: don't buy an off-the-shelf application that neither meets the business's requirements nor can be cost-effectively extended to meet them.

First, you have to understand your business processes and requirements. I mean really understand them, not just understand the official process documentation that no one follows. All those winces and "yes buts" are requirements and process details that absolutely must be understood and addressed.

Second, you do a trade study. This is a key part of systems engineering. Replacing enterprise systems is expensive, often prohibitively expensive, so do a thorough analysis. Take some of your key users to training sessions and write down every wince and vaguely answered question, because those are all either issues that will stand in the way of delivering value or will require customization.

Finally, keep your pen in your pocket. Don't be tempted to write checks, buy licenses, and sign statements-of-work before you really understand the product. Just ignore those promises of discounts if you get a big order in before the end of the quarter. The next quarter isn't that far away, and the end of the fiscal year may even be approaching. The discounts will reappear then. Instead, make sure the vendor really understands the requirements and business process, and structure any contracts so that they must be met or the vendor will face significant penalties. It's amazing what comes out of the woodwork when you do this. All of a sudden that 3x cost overrun from the original projection is sitting right there in front of your eyes in the form of a quote, all before you've sunk any real money into the project.

The purpose of engaging "the business" in an IT project is not to create a personal shield for all the deficiencies in the system you are deploying. It is to make sure you identify those deficiencies early enough in the project cycle to avoid cost and schedule overruns.
