Thoughts on System Design and the 'Organic' Emergence of Security Requirements


I've been thinking about the classic 'InfoSec Triad' lately (Availability, Integrity, Confidentiality) and how challenging it is to convey the real-world value of these concepts in practical discussions about how to 'secure' a system. The InfoSec industry could stand to improve by providing clear provenance for each 'Security Requirement' that gets proposed, and most security professionals struggle to convey information security theory to non-specialists.

In this post I explore a hypothetical system and attempt to show where security concerns surface during the lifecycle of a growing software system. Not all systems will have the opportunity to grow to the point where designers need to seriously consider the more complicated and 'fun' security angles.

Security principles can seem disconnected from security theory, and the tie-back to concrete security requirements is often left unexpressed. If systems are to provide meaningful value, my opinion is that they must be rooted in availability and trust. 'Security' concerns, requirements and controls emerge as a system progresses on its journey. Like the five stages of grief, progression may not be 'linear', and stages can and often will be repeated as a product, service or solution evolves (for example, most people would consider more functionality to be an expectation over time):

Functionality (Does it do something useful?)

  • The system will be designed and engineered to address one or more specific problems, issues or needs that the designers / decision makers have

Usability (Can the target audience make use of the functionality?)

  • If the audience can't use the tool or system, it becomes hard to leverage any potential functionality that may be there (UX)
  • Identification of the target audience to some acceptable level of approximation is needed
  • If multiple audience personas are identified, it would be wise to consider any differentiation in capability between user personas (a rough sketch follows this list)
  • Does there need to be differentiation between specific users within the personas (individual state), or is 'global' state sufficient for the system (e.g. wiki vs Google Docs)?
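
As a rough sketch of what capability differentiation between personas could look like in code (the persona names and actions below are invented for illustration, not taken from any particular system):

```python
# A minimal sketch of differentiating capability between user personas.
# The persona names and actions here are hypothetical examples.

PERSONA_CAPABILITIES = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "configure"},
}

def can_perform(persona: str, action: str) -> bool:
    """Return True if the given persona is allowed to perform the action."""
    return action in PERSONA_CAPABILITIES.get(persona, set())

# Example: an editor may write, but may not change system configuration.
assert can_perform("editor", "write")
assert not can_perform("editor", "configure")
```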

Reliability (Is the system accessible and available to your target audience?)

  • Consistency is an element of reliability - does the system behave predictably? Can users trust that it will behave as expected?
  • Quality assurance confirms that expectations are met across multiple axes (stakeholders, designers, implementors, users, auditors)

Recovery when the system fails

  • All systems eventually fail. As a system emerges and evolves, these edge and corner cases will be encountered and need to be accounted for
  • Business Continuity enters the picture here: What do we do if the system is 'unavailable'?
  • Backup and Restore become thoughtful considerations. How much data do we need to store so we can recover to a good state? What mechanism do we use to do this? (a rough sketch follows this list)
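
Here is a minimal sketch of one way to capture a known-good state at backup time so a restore can be verified later; the file paths and the choice of SHA-256 are assumptions for illustration, not a prescription:

```python
# A minimal sketch of a backup that records a checksum so a restore can be
# verified against a known-good state. Paths and names are illustrative only.

import hashlib
import shutil
from pathlib import Path

def backup(source: Path, dest: Path) -> str:
    """Copy source to dest and return a SHA-256 digest of its contents."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    shutil.copy2(source, dest)
    return digest

def verify_restore(restored: Path, expected_digest: str) -> bool:
    """Confirm a restored file matches the digest captured at backup time."""
    return hashlib.sha256(restored.read_bytes()).hexdigest() == expected_digest

# Example usage (hypothetical paths):
# digest = backup(Path("data.db"), Path("backups/data.db.bak"))
# assert verify_restore(Path("backups/data.db.bak"), digest)
```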

Load/Scale concerns emerge as a system becomes popular and gains an audience

  • Oftentimes there are limits to the original design which made sense at the time yet turn out to be costly bottlenecks once the system needs to scale
  • In today's climate, a consideration is ensuring access by legitimate users while preventing service misuse by other parties (web scrapers, competitors, threat actors, etc.), as any use of the service outside the intended audience translates to increased cost for the business (a rough sketch of one throttling approach follows this list)
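
One common way to keep legitimate users served while throttling misuse is a token bucket; the sketch below is illustrative, and the rate and burst capacity are made-up numbers:

```python
# A minimal token-bucket sketch for limiting use of a service by any one
# client, legitimate or otherwise. Rate and capacity are illustrative.

import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec      # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, False if it should be throttled."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: allow roughly 5 requests per second with bursts of up to 10.
bucket = TokenBucket(rate_per_sec=5, capacity=10)
```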

Reproducibility & Trustworthiness: As a system transforms into a 'System of Record' for whatever it was designed to do, additional requirements are placed under the microscope:

  • Non-repudiation - Can we produce strong evidence to refute false claims?
  • Tamper-resistance - Do we implement techniques which validate the integrity of our system as a whole, of its component parts, and of the data contained within?
  • Tamper-evidence - Do we have a mechanism to detect if the system, one of its component parts, or the data contained within it has been tampered with? This strikes at the heart of system integrity (a rough sketch follows this list)
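
As a sketch of tamper-evidence (as opposed to tamper-resistance), an HMAC over each record lets us detect modification after the fact; the hard-coded key below is a placeholder, and a real system would use a properly managed secret:

```python
# A minimal sketch of tamper-evidence: an HMAC tag stored alongside each
# record lets us detect (not prevent) modification.

import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical placeholder

def seal(record: bytes) -> str:
    """Return an HMAC tag to store alongside the record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def is_untampered(record: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(seal(record), tag)

tag = seal(b"order #42: 3 widgets")
assert is_untampered(b"order #42: 3 widgets", tag)
assert not is_untampered(b"order #42: 300 widgets", tag)
```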

Incident-driven maturity: After a system encounters its first 'major incident' of any variety, a new set of questions comes into focus. (Complex systems tend to have many failure modes that are often blamed on 'customer misuse' or bad actors, yet really come down to rough edges in how systems are designed and implemented - even in safety-critical systems.)

  • How do we know actions are performed by legitimate users (MFA/2FA/Other)?
  • How do we know what 'normal' looks like? And how can we respond if we detect 'abnormal' behavior? (drift detection, anomaly detection, logs, instrumentation, monitoring, alerts, response runbooks - a rough sketch follows this list)
  • Is the system designed to repel attempts to behave outside well-defined user personas (e.g. 'elevation of privilege' or 'code execution')? (security architecture at the core)
  • How much do we know about our suppliers and their ability to satisfy our needs in the event of an incident? Are they trying to prevent incidents?
  • Do we even know what we have to protect? (i.e. 'asset inventory')
  • Do our people even know what to do if there is a disaster or even just an incident (runbooks)?
  • Do we even have people tasked with instrumenting and observing the system? If so, do they have an understanding of the system (product requirements/documentation/tribal knowledge)?
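
As a toy illustration of 'knowing what normal looks like', one approach is to flag a metric that drifts far from its historical baseline; the metric (failed logins per hour) and the three-sigma threshold are assumptions made for the example:

```python
# A minimal sketch of flagging 'abnormal' behavior against a learned baseline:
# alert when a count sits far outside its historical mean.

import statistics

def is_anomalous(history: list[int], current: int, sigmas: float = 3.0) -> bool:
    """Flag the current value if it deviates more than `sigmas` standard
    deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero on flat data
    return abs(current - mean) > sigmas * stdev

# Example: failed logins per hour have hovered around 4-6; a spike to 40 stands out.
baseline = [5, 4, 6, 5, 5, 4, 6]
assert is_anomalous(baseline, 40)
assert not is_anomalous(baseline, 6)
```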

Large Enterprise and Government: These customers have checklists that they need you to follow, which encompass some or many of the considerations listed above.

  • Typically there is a need for a 3rd party to attest that the system behaves as expected and that it complies with relevant laws, regulations and contractual agreements
  • This adds extra administrative work for the team that manages the system and usually results in standing contracts with outside parties and/or hiring dedicated employees to handle the details of demonstrating conformance to large enterprise and government requirements.

This was an exploration of ideas and is not meant to rewrite the CBK - security people should be part of the team, understand what a system does and why it is designed the way that it is, and collaborate with stakeholders, designers, implementors and others to practically apply 'information security theory' in an approachable and digestible manner. Checkbox-driven security is pointless. 90s-centric solutions to security are cumbersome, over-engineered and lack business insight.

Information Security as a profession can provide a tremendous amount of value to the business if we seek understanding, relate principles appropriately and build relationships of trust with systems professionals. Here's hoping 2023 brings more collaboration and more resiliency against bad actors and misbehaving systems. :)

Happy new year!