Software crisis

Software crisis is a term that was used in the early days of computing science to describe the impact of rapid increases in computer power and the growing complexity of the problems that could be tackled. In essence, it refers to the difficulty of writing correct, understandable, and verifiable computer programs. The roots of the software crisis are complexity, expectations, and change.

The term "software crisis" was coined by F. L. Bauer at the first NATO Software Engineering Conference in 1968 in Garmisch, Germany.[1] An early use of the term appears in Edsger Dijkstra's 1972 ACM Turing Award lecture[2]:

The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.

Edsger Dijkstra, The Humble Programmer (EWD340), Communications of the ACM

The causes of the software crisis were linked to the overall complexity of hardware and of the software development process. The crisis manifested itself in several ways:

  • Projects ran over budget.
  • Projects ran over schedule.
  • Software was very inefficient.
  • Software was of low quality.
  • Software often did not meet requirements.
  • Projects were unmanageable and code was difficult to maintain.
  • Software was never delivered.

Many of the software problems were caused by increasingly complex hardware. In his essay, Dijkstra noted that the newer computers in his day "embodied such serious flaws that [he] felt that with a single stroke the progress of computing science had been retarded by at least ten years"[2]. He also believed that the influence of hardware on software was too frequently overlooked.

Various processes and methodologies have been developed over the last few decades to "tame" the software crisis, with varying degrees of success. However, it is widely agreed that there is no "silver bullet": no single approach will prevent project overruns and failures in all cases. In general, software projects that are large, complicated, poorly specified, or involve unfamiliar aspects remain particularly vulnerable to large, unanticipated problems.

