
Secure by Design?

‘Secure by design’ is a term that is used by marketing people and those writing requirements statements. It is a worthy objective, but difficult to define and measure. In this brave new world of ‘cyber’ and ‘cloud’ it is assumed, by many, that secure by design must require some evolutionary leap in security architectural concepts to deliver such an objective in the 21st century.

This idea is flawed – the principles of secure design were established a long time ago and remain as true today as they were in 1975, when they were set out in Saltzer and Schroeder's seminal paper, "The Protection of Information in Computer Systems". This set of secure design principles has been used, interpreted and redefined many times, yet remains the basis of security architecture and is particularly applicable to security systems (the part of an IT environment that implements security functions or offers security services).

Saltzer and Schroeder Design Principles

There are eight design principles:

Economy of mechanism: This simply states that systems should be as small and simple as possible so that the design is easy to understand and test for undesirable side-effects.

Fail-safe defaults: Access should be based on explicit permission and denied in the absence of one; this is sometimes called default-deny.
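A minimal sketch of default-deny in Python (the permission table and names are illustrative, not from any particular product): any user/resource pair with no explicit entry falls through to denial.

```python
# Hypothetical permission table: only explicitly granted actions appear.
PERMISSIONS = {
    ("alice", "payroll.csv"): {"read"},
}

def is_allowed(user, resource, action):
    # A missing entry yields the empty set, so the default is deny --
    # forgetting to add a rule fails safe rather than fails open.
    return action in PERMISSIONS.get((user, resource), set())
```

Note that the safety comes from the lookup's default, not from any explicit "deny" rule: an unconfigured system grants nothing.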

Complete mediation: Every access to every object must be checked for authority. This is really the key to security in most environments; the lack of this approach is one of the most common reasons for failure.
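Complete mediation can be sketched as a single checkpoint that every access must pass through – a toy reference monitor, with hypothetical names, rather than any real library:

```python
class MediatedStore:
    """Toy reference monitor: no path to the data bypasses _check."""

    def __init__(self, acl):
        self._data = {}
        self._acl = acl  # {(user, key): {"read", "write"}}

    def _check(self, user, key, action):
        if action not in self._acl.get((user, key), set()):
            raise PermissionError(f"{user} may not {action} {key}")

    def write(self, user, key, value):
        self._check(user, key, "write")  # checked on every call
        self._data[key] = value

    def read(self, user, key):
        self._check(user, key, "read")   # re-checked each time, never cached
        return self._data[key]
```

The common failure the principle warns against is exactly the opposite pattern: checking authority once (say, at login) and then handing out an unmediated reference to the data.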

Open design: The design should not be secret or, to put it another way, security should not depend on the attacker's ignorance. This is actually an expansion of Kerckhoffs's principle for cryptographic systems, dating from 1883.

Separation of privilege: A system is more secure if it requires two keys to unlock it. Two-factor authentication anyone?

Least privilege: Every program and user should operate with the smallest set of privileges needed to complete the job.
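One idiomatic way to express least privilege in Python is to hand a component a read-only view of shared data instead of the full mutable structure – a small sketch using the standard library's `MappingProxyType`:

```python
from types import MappingProxyType

# Hypothetical shared record: the owner keeps the mutable dict...
store = {"salary": 50000}

# ...and passes downstream code a read-only proxy, so that code
# holds exactly the privilege it needs (read) and nothing more.
readonly_view = MappingProxyType(store)
```

Any attempt to assign through `readonly_view` raises `TypeError`, so a bug (or compromise) in the reporting code cannot corrupt the record.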

Least common mechanism: Minimise the amount of mechanism common to more than one user and depended on by all users. Every shared mechanism (especially one involving shared variables) represents a potential information path between users and must be designed with great care to be sure it does not unintentionally compromise security.

Psychological acceptability: It is essential that the human interface is designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. This could be interpreted as a need to minimise user interaction with security systems as much as possible.

In addition, two further principles may be applied in some (but not all) cases:

Work factor: Compare the cost of circumventing the mechanism with the resources of a potential attacker. This is often misinterpreted to mean that, for example, a simply hashed password is secure enough. Unfortunately, the availability of 'rainbow tables' (precomputed tables for reversing cryptographic hash functions, usually for cracking password hashes) makes this a fallacy.
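The standard countermeasure is to raise the attacker's work factor with a per-password random salt (which defeats precomputed rainbow tables) and a deliberately slow key-derivation function. A minimal sketch using Python's standard library PBKDF2 (function names are illustrative; the iteration count is an assumption, and real deployments should follow current guidance):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # A fresh 16-byte salt per password makes precomputed tables useless.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

The iteration count is the tunable work factor: it is chosen to make each guess cheap for a legitimate login but expensive across the billions of guesses an attacker must make.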

Compromise recording: Mechanisms that reliably record that a compromise of information has occurred can be used in place of more elaborate mechanisms that completely prevent loss. This approach is the basis of some cybersecurity resilience approaches but is not always desirable.

Applicability

As noted above, these principles have stood the test of time and are the basis of any secure design. In fact, it is instructive to consider how ignoring these principles leads to common faults:

  • Reliance on security by obscurity
  • Promiscuous allocation of privileges
  • Assumptions about password security
  • Complex security interfaces that users ignore or bypass
  • Weak access control mechanisms

There is also food for thought when considering current trends in web/cloud authentication and authorisation mechanisms – is my Google ID really enough to authenticate me to HMRC or my conveyancing solicitor?

How can Ascentor help?

We have combined decades of experience in understanding how things work and identifying flaws in security system design. We’ve also developed our own approach to projects from design to specification – IA Inside. Perhaps we can share this experience with you?

IA Inside – Ascentor’s Secure by Design approach

IA Inside is a full lifecycle approach to building IA into the heart of your projects – it helps buyers and suppliers make IA holistic, integrated and effective. IA Inside supports government initiatives to make systems secure by design. Find out more here.

For further information

If you have found this article of interest, the Ascentor blog regularly carries articles about a range of topical cyber security issues. You might also like to keep in touch with Ascentor by receiving our quarterly newsletter. Sign-up details below.

If you’d like to discuss any aspect of IA and cyber security, please get in touch with Dave James, MD at Ascentor, using the contact details below.