This article is taken from US-CERT.
The following principles should guide the development of secure software, including all decisions made in producing the artifacts at every phase of the software life cycle.
The software should contain as few high-consequence targets (critical and trusted components) as possible. High-consequence targets are those whose compromise would cause the greatest loss, and they therefore require the most protection from attack. (This principle contributes to trustworthiness and, through its implied contribution to smallness and simplicity, also to dependability.)
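One way to keep the number of high-consequence targets small is to confine each critical asset to a single small component rather than letting it spread through the program. The sketch below is a hypothetical illustration (the class name and API are invented for this example): only one class ever touches a signing key, and the rest of the program sees nothing but a verify/sign interface.

```python
import hmac
import hashlib

class TokenVerifier:
    """Hypothetical example: the only component that ever touches the
    signing key, keeping the high-consequence target small and isolated."""

    def __init__(self, key: bytes):
        self._key = key  # the key never leaves this class

    def sign(self, message: bytes) -> str:
        # compute an HMAC-SHA256 tag for the message
        return hmac.new(self._key, message, hashlib.sha256).hexdigest()

    def verify(self, message: bytes, tag: str) -> bool:
        expected = self.sign(message)
        # constant-time comparison avoids leaking information via timing
        return hmac.compare_digest(expected, tag)
```

Because the key is confined to one short class, only that class needs the heaviest review and protection; everything else handles only derived, lower-consequence data.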
The critical and trusted components the software contains should not be exposed to attack. Known vulnerable components should also be protected from exposure, because they can be compromised with little attacker expertise or expenditure of effort and resources. (This principle contributes to trustworthiness.)
The software should not provide the attacker with the means to compromise it. Such means include exploitable weaknesses and vulnerabilities, dormant code, backdoors, and the like. The software should also provide the ability to minimize damage and to recover and reconstitute itself as quickly as possible after a compromising (or potentially compromising) event, preventing further compromise. In practical terms, this requires building in the means to monitor, record, and react to how the software behaves and what inputs it receives. (This principle contributes to dependability, trustworthiness, and resilience.)
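The monitor-record-react idea can be sketched in a few lines. The handler below is a hypothetical example (its name, limits, and rejection rule are assumptions for illustration): it logs every input it receives, and when an input looks anomalous it records the event and refuses to process it rather than acting on it blindly.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guard")

def guarded_handler(payload: str) -> str:
    """Hypothetical request handler that monitors and records its inputs,
    and reacts to anomalies instead of processing them."""
    # record: every input is logged, supporting later forensics and recovery
    log.info("received payload of %d bytes", len(payload))
    # react: reject inputs that violate the handler's expectations
    if len(payload) > 1024 or "\x00" in payload:
        log.warning("rejected anomalous payload")
        return "rejected"
    return "accepted"
```

The log produced this way is exactly the record needed to reconstruct what the software saw and did after a compromising event.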
Events that seem impossible rarely are. The judgment that an event is impossible often rests on an expectation that something in a particular environment is highly unlikely to exist or to happen; if the environment changes, or the software is installed in a new environment, those events may become quite likely. The use cases and scenarios defined for the software should therefore take the broadest possible view of what is possible, and the software should be designed to guard against both likely and unlikely events.
Developers should make an effort to recognize assumptions they are not initially conscious of having made and should determine the extent to which the “impossibilities” associated with those assumptions can be handled by the software. Specifically, developers should always assume that their software will be attacked, regardless of what environment it may operate in. This includes acknowledgement that environment-level security measures such as access controls and firewalls, being composed mainly of software themselves (and thus equally likely to harbor vulnerabilities and weaknesses), can and will be breached at some point, and so cannot be relied on as the sole means of protecting software from attack.
Developers who recognize the constant potential for their software to be attacked will be motivated to program defensively, so that software will operate dependably not only under “normal” conditions but under anomalous and hostile conditions as well. Related to this principle are two additional principles about developer assumptions.
- Never make blind assumptions. Validate every assumption made by the software or about the software before acting on that assumption.
- Security software is not the same as secure software. Just because software performs information security-related functions does not mean the software itself is secure. Software that performs security functions is just as likely to contain flaws and bugs as other software. However, because security functions are high-consequence, the compromise or intentional failure of such software has a significantly higher potential impact than the compromise or failure of other software.
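The "validate before acting" principle above can be shown with a small sketch. The function below is a hypothetical example (its name and checks are assumptions for illustration): rather than trusting that a caller-supplied string is a usable TCP port, it validates that assumption explicitly before acting on it.

```python
def parse_port(value: str) -> int:
    """Validate the assumption that `value` names a usable TCP port
    before acting on it, instead of trusting the caller."""
    if not value.isdigit():
        raise ValueError("port must be a decimal integer")
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError("port out of valid range 1-65535")
    return port
```

Every assumption the software makes about its inputs can be turned into a check of this form, so that a violated assumption produces a controlled failure instead of undefined behavior.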