FORTRAN

The Value of Simple Exploratory Models for Explaining Complex Behaviors

Inspired by Episode 39. CARNEGIE-MELLON SERIES No. 4 — ORGANIZATIONAL CHOICE
By Tom Galvin
Listen to Tom’s sidecast here:


In past seasons, we discussed the extent to which publication practices valuing journal articles above books limit our understanding of organizational phenomena. We also debated how the peer-review process and its current emphasis on ‘theoretical contributions’ sometimes limits the capacity of authors to convey the empirical richness of their studies. For example, in Episode 27 on Andrew Pettigrew’s study of context and transformation of the firm, we lamented the focus on precise empirical study at the expense of more meaningful monographic treatments of organizational phenomena. We revisited that theme in Episodes 29 and 30, and it surfaced a couple more times in Season 4.


Enter Episode 39, where we explore a famous 1972 article in Administrative Science Quarterly from Cohen, March, and Olsen on the Garbage Can Model of Decision Making, which contained (of all things) a fully documented computer program written in FORTRAN 66! The article also included details of how they designed the program and what its outputs were. As we discuss during the podcast, this was far from an empirical study. They designed the model solely for exploratory purposes: to demonstrate an interesting concept that could apply to actual organizations such as colleges and universities of various sizes. It struck me because present-day articles devote so little time to the models in use, either mentioning minimal details in the text or providing a summary or introduction to them in an appendix. Certainly not something that could be replicated as-is by copy-pasting it from the journal.

While the dialogue in our episode focused on the theoretical and philosophical questions raised and the implications for our understanding of current organizational phenomena, I was drawn to the model itself because it recalled a long-forgotten project of a similar vein that I was involved in back in 1996-97.

The Need – A Tool to Aid Executive Coaching

At the time, the U.S. military’s War Colleges had come to recognize that in the post-Cold War environment, military leaders needed to be strategic thinkers capable of understanding an increasingly complex global environment. Soviet Communism was no longer the driving threat, and the U.S. found itself involved in a number of smaller conflicts around the globe (Somalia, Bosnia, and others). A U.S. military trained and ready to hold the line between Western and Eastern Europe had to reorient itself to produce leaders with the skills and competencies needed for a different environment.

A team of faculty from the U.S. Army War College (USAWC) and the Industrial College of the Armed Forces (ICAF) was working on a program to coach and mentor the military’s future leaders. Leading the effort was ICAF faculty member T. Owen Jacobs, who with Elliott Jaques had developed stratified systems theory (SST). SST describes how, as environmental complexity increases, a system’s complexity must increase in kind. For organizations, SST’s application has been in how hierarchical levels translate to vertical differentiation of complexity. Seven strata divided among three domains (direct, organizational, and strategic) describe how holding positions of leadership at progressively higher levels translates to higher-order responsibilities and longer time horizons.

The team employed SST as a basis for measuring the capacity of students in the two schools to assume positions of higher leadership. Using available personality and psychological instruments (which at the time were limited and expensive), they measured personality traits, cognitive abilities, and emotional intelligence, then analyzed the results and provided one-on-one feedback to the students. Within a couple of years of doing this, the ICAF faculty determined that a worrisome percentage of budding senior officers only had the capacity to serve in direct leadership positions.

The problem they faced was simple. There were only four or five ICAF faculty members capable of delivering this one-on-one feedback to what was a class of several hundred students, and USAWC was similarly undermanned for the task. They needed a way for any faculty member to interpret the results of the instruments and deliver useful one-on-one feedback to the students.

My Role

This is where I came in. I had joined USAWC a year earlier, serving in one of its non-teaching institutes as an artificial intelligence (AI) specialist doing various projects in support of the educational program. At the time, the Army had a robust AI program in which captains like me went to grad school for AI and then served a utilization tour at various Army schools and research institutions. When the joint ICAF-USAWC team came looking for help, I was assigned as a consultant.

I assessed that they were looking for an expert system, a fairly common AI application that often used qualitative methods. After a couple of months of learning about the topic and the instruments and interviewing the faculty and team, I had collected a considerable amount of information about how the faculty went about their business. They generally approached the task in a consistent way, looking at one instrument first to develop a quick picture, then moving on to the other instruments to look for confirming information. Because some factors across instruments tended to correlate among military officers, they had devised a lot of shortcuts. But overall, the findings they latched onto were results that seemed contradictory or unusual… in their words, ‘interesting.’ They would spend the majority of their time grappling with the unexpected, attempting to generate meaning from the results.

Unfortunately, what I recall winding up with in my data was a bunch of assertions and rules built on the exceptions, but no easy sense of how the whole thing fit together in a way that would let an expert system help non-experts derive similar conclusions. It did not help that there were disagreements among the team members about things such as what constitutes a ‘normal’ finding versus one that might be ‘concerning.’ One would look at a file and judge it one way, and another would draw a completely opposite conclusion.

My approach was to build an expert system-like model that captured the rules and assertions (‘facts’ in AI-speak) that I had, and then tinker with it to figure out all the other cases and exceptions that had not been raised. Although expert systems are qualitatively oriented, what I actually did was craft a model that looked a lot like the garbage can model. Here were a bunch of facts requiring interpretation, and there were a number of rules that did not apply perfectly but could. Running the model a few times on the de-identified data I was provided, I began developing possible rules for patterns that I believed the faculty would find ‘interesting.’ As I went through several iterations of this exploratory system, I fed ideas back to the ICAF faculty: if you found profiles that had this information, would you interpret it as ________?
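To give a flavor of what that rule-and-fact tinkering looked like, here is a minimal sketch in modern Python rather than the tooling of the mid-1990s. The instrument names, scores, thresholds, and rules are invented placeholders for illustration, not the actual measures or logic from the project.

```python
# Hypothetical sketch of a rule-based "interesting profile" flagger.
# Instrument names and thresholds are made up for illustration only.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    label: str
    applies: Callable[[dict], bool]  # does this profile trigger the rule?


# Each de-identified profile is a dict of scores ("facts" in AI-speak).
profiles = [
    {"id": "A", "cognitive": 82, "ambiguity_tolerance": 35, "time_horizon": 78},
    {"id": "B", "cognitive": 45, "ambiguity_tolerance": 40, "time_horizon": 38},
]

# Rules encode expected cross-instrument correlations; a triggered rule marks
# the profile as 'interesting' -- i.e., worth an expert's closer look.
rules = [
    Rule("high cognitive score but low tolerance for ambiguity",
         lambda p: p["cognitive"] >= 70 and p["ambiguity_tolerance"] < 40),
    Rule("long time horizon without a matching cognitive score",
         lambda p: p["time_horizon"] >= 70 and p["cognitive"] < 60),
]

for p in profiles:
    hits = [r.label for r in rules if r.applies(p)]
    verdict = ("interesting: " + "; ".join(hits)) if hits else "unremarkable"
    print(f"profile {p['id']}: {verdict}")
```

The point of such a sketch is not to automate judgment but to surface candidate patterns that could be fed back to the experts as questions.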

By the time we held a video teleconference a month or so later, both my ideas and the team’s own deliberations had resolved a lot of the differences in perspective. We came to the conclusion, one I still believe was right, that an expert system was not appropriate for the task. There was too much subjective judgment involved that needed to remain. An expert system would not be able to provide as suitable or acceptable an interpretation of the data as a non-expert who had been appropriately coached. Thus, work on the exploratory model I had built ended, and it was ultimately lost in the bit-bucket in the sky when I left the USAWC in the summer of 1997.

Two Benefits of Do-It-Yourself Modeling – Innovation and Transparency

Thinking about that experience and relating it to the Cohen et al. article from Episode 39, I can draw a couple of conclusions. First, the earlier days of computing allowed a great number of scientists and technologists to do their own programming. Those who developed programming skills had the ability to craft small-scale, elegant models to help them grasp complex or uncertain phenomena. When I looked at the tools available to me during my doctoral program a few years ago, I was amazed at how sophisticated and powerful they were, until I learned that they were so sophisticated and powerful that they did not leave much room for creativity. I would instead turn to basic spreadsheet programs to do certain mathematical tasks ‘my way,’ and it worked out much better.

The FORTRAN 66 program in Cohen’s article looks indecipherable at first because FORTRAN 66 was a very rudimentary programming language. I had never used FORTRAN before, but once I figured it out using online resources, I realized fairly quickly how simple the model was. As we remarked in various parts of the episode, it was amazing what the authors were able to glean from the results. For my part, I might have considered some of their findings a bit fantastic and even questionable had I not taken the time to study the source code to see what they were doing.
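To convey just how compact such a model can be, here is a loose Python sketch of garbage-can-style dynamics. It is emphatically not a translation of the authors’ FORTRAN 66 program: the parameters, attachment probabilities, and energy rules are invented for illustration, and it omits much of what the original simulates (decision by flight, access and decision structures, organizations of different sizes).

```python
import random

# Toy garbage-can-style simulation (not the authors' model): problems and
# participant energy flow into open choice opportunities over time. A choice
# is "made" once the energy poured into it covers the load of its attached
# problems ("resolution"), or once it clears a small threshold with no
# problems attached ("oversight"). Decision by flight is omitted.

random.seed(1)

NUM_CHOICES, NUM_PROBLEMS, NUM_PARTICIPANTS, PERIODS = 4, 8, 6, 20
LOAD_PER_PROBLEM = 1.5  # energy an attached problem demands before resolution

choices = {c: {"energy": 0.0, "problems": set(), "made": False}
           for c in range(NUM_CHOICES)}
attached_to = {p: None for p in range(NUM_PROBLEMS)}  # problem -> choice id
resolved = set()

for t in range(PERIODS):
    open_choices = [c for c, v in choices.items() if not v["made"]]
    if not open_choices:
        break
    # Unattached, unresolved problems drift onto a random open choice.
    for p, loc in attached_to.items():
        if p not in resolved and loc is None and random.random() < 0.4:
            c = random.choice(open_choices)
            attached_to[p] = c
            choices[c]["problems"].add(p)
    # Each participant deposits a dollop of energy into one open choice.
    for _ in range(NUM_PARTICIPANTS):
        choices[random.choice(open_choices)]["energy"] += random.uniform(0.2, 0.6)
    # A choice is made when its energy meets the load of its attached problems.
    for c in open_choices:
        v = choices[c]
        needed = max(LOAD_PER_PROBLEM * len(v["problems"]), LOAD_PER_PROBLEM)
        if v["energy"] >= needed:
            v["made"] = True
            style = "resolution" if v["problems"] else "oversight"
            resolved |= v["problems"]
            print(f"t={t:2d}: choice {c} made by {style} "
                  f"({len(v['problems'])} problem(s) attached)")

print("Problems never resolved:", sorted(set(range(NUM_PROBLEMS)) - resolved))
```

Even a stripped-down sketch like this makes it easy to see why some choices get made without resolving anything, which is the kind of insight the original simulation was built to surface.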

The second conclusion is that there should be room for more of these kinds of articles in today’s organizational literature. I find it an unsettling pattern that so many of the classic works of organization science we have explored in Talking About Organizations are assessed as being unpublishable today. Cohen et al. represents a very unusual case in which the full details of a non-empirical model are disclosed in a peer-reviewed journal. Similar approaches would seem useful as a first step for exploring complex longitudinal phenomena (and I would not be surprised if there are plenty of researchers out there doing just that).

I thank Pedro Monteiro and Craig Bullis for their helpful feedback on earlier versions of this post.