Monday, August 10, 2009

You mean the Sun doesn't go around the earth?

A Nature Conservancy blogger says that scientists are to blame for an anti-science country (the part about writing passively is amusing):
Between 2002 and 2007, nearly 32,000 Ph.D.s in science were awarded in the United States. These not so-young Ph.D.s (median age for receiving a Ph.D. is 33) are trained to become like their mentors — college professors, even though at best only one in 10 will actually land a tenure-track job. And that was before the recession. These scientists are deft at statistics and experimental design, and have been schooled in writing passively, without adjectives or storyline or anything that could capture the interest of anyone other than the 17 other specialists working on the same research topic.
He even talks about C.P. Snow at the end, whose famous lecture is now 50 years old. Snow was the topic of one of my first posts.

Finally, during jury duty in April and May I spent a lot of time looking at the Security Bank building out the window. The units look like great lofts, but pricey.

More UCM woes

An article from Inside Higher Ed is about the newest UC campus at Merced. Putting on my three-time UC alum hat, it should have been in or near Fresno. Clearly it's not a matter of being in the valley or close to a CSU (UCD and Sac State seem to be doing OK). I remember the UC president at the time being tired of hearing from Fresno-area alums about picking the wrong site :)

Saturday, August 01, 2009

Physician envy?

Steve's thoughts about software development as a stochastic art (you should read it) got me thinking again about how software developers look to the medical field for inspiration (or at least metaphor or analogy). But the question that has nagged me for a couple of decades is why we would want to model ourselves after an industry in which all the users eventually die and that is running us out of money :)

Some random thoughts about health care and software development:
  • Back in 1992, Tom McCabe and Charles Butler suggested that cyclomatic complexity and essential complexity measures could be used like blood pressure readings to gauge the health of code. High cyclomatic complexity can be treated by restructuring the code (abstraction, essentially), but high essential complexity is more difficult to address, since it means the essence of the code's structure can't be reduced beyond that point. There's a scanned PDF version here, but you'll have to scroll about 60% down and look for "A clinical approach to reverse and reengineering" (IEEE Software, January 1992).
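To make the "blood pressure" measurement concrete, here's a minimal sketch of computing cyclomatic complexity from a control-flow graph using McCabe's formula M = E - N + 2P (edges minus nodes plus twice the number of connected components). The graph representation and node names are my own illustration, not from the McCabe/Butler paper:

```python
def cyclomatic_complexity(cfg):
    """cfg: dict mapping each node to a list of its successor nodes.
    Assumes the graph is one procedure, i.e. one connected component (P = 1)."""
    nodes = set(cfg)
    for succs in cfg.values():
        nodes.update(succs)               # include nodes that only appear as targets
    edges = sum(len(succs) for succs in cfg.values())
    return edges - len(nodes) + 2         # M = E - N + 2P with P = 1

# Control-flow graph of a function with a single if/else:
#   entry -> test -> then -> exit
#                 -> else -> exit
cfg = {
    "entry": ["test"],
    "test": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
}
print(cyclomatic_complexity(cfg))  # 2: one decision point, so two linearly independent paths
```

A doctor-style "checkup" would flag functions whose M exceeds some threshold (McCabe's oft-cited number is 10) for restructuring.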

  • Over thirty years ago, chapter three of Fred Brooks's The Mythical Man-Month described the surgical team (aka Chief Programmer Team) organization for software development. Amazingly, this is yet another idea that originally came from Harlan Mills, a great thinker whom most developers have never heard of (chief programmer teams, cleanroom software engineering, structured programming, ...)

    After reading Brooks back in the day, I first started thinking about the big "individual differences" in programming performance and how to leverage them (McConnell has a nice discussion).

  • Although I think he gets a little "out there" at times, Jim McCarthy's Dynamics of Software Development suggests that we "Be more like the doctors". I can't find my copy right now, but a good quote is here:
    For now, we really need to learn to be like doctors. They are able to say, quite comfortably and confidently and with conviction, "These things are never certain." Doctors seldom if ever state with certainty what the outcome of any procedure might be. Yet software managers, operating in a far less disciplined and less data-driven environment... blithely promise features, dates, and outcomes not especially susceptible to prediction.
    Interestingly, the uncertainty that McCarthy cites is a motivation of the evidence based medicine (and evidence based software engineering) movements.

    You can watch a really old video of McCarthy giving his famous "23 Rules of Thumb" here. You can tell it's an old video since they talk about consultation fees being $100/hour :)

  • Anecdotally, when expert systems burst onto the software stage, the software was better at diagnosing rare diseases than human physicians were. I'll have to try to find some citations. But is there something in that we can transfer to software development? As I remember, Feigenbaum's systems were pretty good, but Lenat's CYC wasn't so good at diagnosis (concluding that Lenat's rust-pocked car had chicken pox :) Britannica has an intriguing summary of Feigenbaum's work:
    Experience with DENDRAL informed the creation of Feigenbaum’s next expert system, MYCIN, which assisted physicians in diagnosing blood infections. MYCIN’s great accomplishment lay in demonstrating that often the key is not reasoning but knowing. That is, knowing what symptoms correspond to each disease is generally more important than understanding disease etiology. At a basic level, MYCIN also demonstrated that the means of navigating the reasoning tree and the contents of the different branches can be treated separately.
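That last point, separating the reasoning machinery from the knowledge it navigates, is easy to illustrate. Here's a toy sketch in the MYCIN spirit where the rules live in plain data and a generic engine walks them; the diseases and symptoms are made up for illustration, not taken from MYCIN:

```python
# Knowledge base: each "rule" maps a diagnosis to the set of findings it requires.
# The engine below knows nothing about medicine; swap in different rules and it
# diagnoses something else entirely.
RULES = {
    "flu":  {"fever", "cough", "aches"},
    "cold": {"cough", "sneezing"},
}

def diagnose(findings, rules):
    """Generic inference step: return every diagnosis whose required
    findings are all present (set containment, no domain logic)."""
    return [dx for dx, required in rules.items() if required <= findings]

print(diagnose({"fever", "cough", "aches", "sneezing"}, RULES))  # ['flu', 'cold']
print(diagnose({"cough"}, RULES))                                # []
```

The point isn't the (trivial) matching logic; it's that all the "knowing" sits in RULES, which is exactly the separation the Britannica summary credits MYCIN with demonstrating.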

  • Finally, one of my favorite examples of a disconnect between software developers and physicians is here: "Building an Information System for Collaborative Researchers: A Case Study from the Brain-tumor Research Domain". There's a lot to think about; unfortunately, I can't find a free copy to link to. But if you have access to Science Direct or the ACM Digital Library, you can read it.

Bottom line -- for anyone working in requirements, the paper above is probably the most important thing in this post. The other big idea to think about is putting software development (and medicine) on a firm evidence-based foundation.