Trusted system


In the security engineering subspecialty of computer science, a trusted system is a system that is relied upon to a specified extent to enforce a specified security policy. As such, a trusted system is one whose failure may break a specified security policy.


Trusted systems in classified information

Trusted systems are used for the processing, storage and retrieval of sensitive or classified information.

Central to the concept of U.S. Department of Defense-style "trusted systems" is the notion of a "reference monitor", an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is (a) tamperproof, (b) always invoked, and (c) small enough to be subject to independent testing, the completeness of which can be assured. The U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or Orange Book, defined a set of "evaluation classes" describing the features and assurances that the user could expect from a trusted system.
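The reference monitor's role can be illustrated with a minimal sketch. The sketch below assumes a simple multilevel policy in the style of Bell-LaPadula ("no read up, no write down"); the class name, clearance labels, and example subjects are illustrative inventions, not part of the TCSEC itself.

```python
# Minimal sketch of a reference monitor mediating all access decisions.
# The multilevel policy (Bell-LaPadula-style rules) and the labels below
# are illustrative assumptions, not drawn from the TCSEC text.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

class ReferenceMonitor:
    """Every access request must pass through check(); nothing bypasses it."""

    def __init__(self, subject_clearances, object_labels):
        self.subject_clearances = subject_clearances  # subject -> clearance level
        self.object_labels = object_labels            # object  -> classification

    def check(self, subject, obj, operation):
        clearance = LEVELS[self.subject_clearances[subject]]
        label = LEVELS[self.object_labels[obj]]
        if operation == "read":   # simple security property: no read up
            return clearance >= label
        if operation == "write":  # *-property: no write down
            return clearance <= label
        return False              # default deny for unknown operations

monitor = ReferenceMonitor(
    subject_clearances={"alice": "SECRET", "bob": "UNCLASSIFIED"},
    object_labels={"report": "SECRET"},
)
print(monitor.check("alice", "report", "read"))   # True: clearance dominates label
print(monitor.check("bob", "report", "read"))     # False: no read up
```

Note that the monitor itself is the only component that needs to be trusted to enforce the policy, which is why keeping it small and tamperproof matters.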

The highest levels of assurance were guaranteed by significant system engineering directed toward minimization of the size of the trusted computing base, or TCB, defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy.

Because failure of the TCB breaks the trusted system, higher assurance is provided by the minimization of the TCB. An inherent engineering conflict arises in higher-assurance systems in that, the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is therefore untrusted. This may lead to some philosophical arguments about the nature of trust, based on the notion that a "trustworthy" implementation may not necessarily be a "correct" implementation from the perspective of users' expectations.

In contrast to the TCSEC's precisely defined hierarchy of six evaluation classes, the more recently introduced Common Criteria (CC)—which derive from a blend of more or less technically mature standards from various NATO countries—provide a more tenuous spectrum of seven "evaluation classes" that intermix features and assurances in an arguably non-hierarchical manner and lack the philosophic precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support—even encourage—an intermixture of security requirements culled from a variety of predefined "protection profiles." While a strong case can be made that even the more seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.[citation needed]
