Topic: The Secure Software Development Lifecycle
Posted By: merry
Subject: The Secure Software Development Lifecycle
Date Posted: 15Feb2007 at 5:56pm

The Secure Software Development Lifecycle

In the traditional software development lifecycle (SDLC), security testing is often an afterthought: security verification and testing efforts are delayed until after the software has been developed. But vulnerabilities are an emergent property of software, appearing throughout the design and implementation cycles. Therefore, you need to adopt a "before, during, and after" approach to software development.

It is not possible to "test" security into software. Many statistics show that the earlier a defect is uncovered, the cheaper it is to fix. Consequently, it is important to employ security-focused processes throughout the entire lifecycle.

A full lifecycle approach is the only way to achieve secure software. This article discusses the importance of incorporating and addressing security issues early in the lifecycle. It outlines a process called the Secure Software Development Lifecycle (SSDL), which includes early placement of security quality gates, and it discusses how security needs should be addressed in the software development lifecycle, starting with the earliest phases.

Fitting Security Testing into the Software Development Lifecycle

The SSDL represents a structured approach toward implementing and performing secure software development. The SSDL approach mirrors the benefits of modern rapid application development efforts. Such efforts engage the stakeholders early on, as well as throughout analysis, design, and development of each software build, which is created in an incremental fashion.

Under the SSDL, security issues are evaluated and addressed early in the system's lifecycle: during business analysis, throughout the requirements phase, and during the design and development of each software build. This early involvement allows the security team to provide a quality review of the security requirements specification, attack use cases, and software design. The team will also understand the business needs and requirements, and the risks associated with them, more completely. Finally, the team can design and architect the most appropriate system environment using secure development methods, threat-modeling efforts, and so on to generate a more secure design.

Early involvement is significant because requirements and attack use cases form the foundation, or reference point, from which security requirements are defined and against which success is measured. The security team also needs to review the system or application's functional specification.

Security test strategies should be determined during the functional specification/requirements phase. If system security is kept in mind from the start, the product design and coding standards can provide the proper environment for secure development.

The SSDL is geared toward ensuring successful implementation of secure software. It has six primary components:

  • Phase 1: Security guidelines, rules, and regulations
  • Phase 2: Security requirements: attack use cases
  • Phase 3: Architectural and design reviews/threat modeling
  • Phase 4: Secure coding guidelines
  • Phase 5: Black/gray/white box testing
  • Phase 6: Determining exploitability

Once it's been determined that a vulnerability has a high level of exploitability, the respective mitigation strategies need to be evaluated and implemented.

In addition, a process needs to be in place that allows the application to be deployed securely. Secure deployment means that the software is installed with secure defaults: file permissions are set appropriately, and the application's secure configuration settings are used.

After the software has been deployed securely, its security needs to be maintained throughout its existence. An all-encompassing software patch management process needs to be in place. Emerging threats need to be evaluated, and vulnerabilities need to be prioritized and managed.

Infrastructure security, such as firewall, DMZ, and IDS management, is assumed to be in place. Backup/recoverability and availability plans need to be in place. The focus of the SSDL described here is to address secure development processes. No matter how strong your firewall rule sets are or how diligent your infrastructure patching mechanism is, if your Web application developers haven't followed secure coding practices, attackers can walk right into your systems through port 80.

It is often unclear whose job security is. Roles and responsibilities need to be defined, as discussed later in the "Roles and Responsibilities" section.

Security Testing Tasks to Apply Throughout the SSDL

  1. Define security/software development roles and responsibilities.
  2. Understand the security regulations your system has to abide by, as applicable.
  3. Request a security policy if none exists.
  4. Request documented security requirements and/or attack use cases.
  5. Develop and execute test cases for adherence to umbrella security regulations, if applicable. Develop and execute test cases for the security requirements/attack use cases described throughout this article.
  6. Request secure coding guidelines, and train software developers and testers on them.
  7. Test for adherence to secure coding practices.
  8. Participate in threat modeling walkthroughs, and prioritize security tests.
  9. Understand and practice secure deployment practices.
  10. Maintain a secure system by having a patch management process in place, including evaluating exploitability.

SSDL Phase 1: Security Guidelines, Rules, and Regulations

Security guidelines, rules, and regulations must be considered during the project's inception phase. This first phase of the SSDL can be considered the umbrella requirement.

A system-wide specification is created that defines the security requirements that apply to the system; it can be based on specific government regulations. One such governing regulation is the Sarbanes-Oxley Act of 2002 (SOX), which contains specific security requirements. For example, Section 404 of SOX requires that various internal controls be in place to curtail fraud and abuse. This can serve as a baseline for creating a company-wide security policy that covers the requirement. Role-based permission levels, access-level controls, and password standards and controls are just some of the things that need to be implemented and tested to meet the requirements of this SOX section.
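
To make such an umbrella requirement concrete, here is a minimal sketch (in Python) of the kind of checks a security tester might automate against a company-wide policy. The policy values, role names, and permission table are illustrative assumptions for this sketch, not figures mandated by SOX:

    # Illustrative checks for a hypothetical company-wide security policy
    # derived from a SOX-style umbrella requirement. The policy values and
    # role names are assumptions for this sketch, not mandated figures.
    import re

    PASSWORD_POLICY = {"min_length": 8, "require_digit": True, "require_upper": True}

    ROLE_PERMISSIONS = {  # hypothetical role-based permission table
        "auditor": {"read_reports"},
        "accountant": {"read_reports", "post_entries"},
        "admin": {"read_reports", "post_entries", "manage_users"},
    }

    def password_meets_policy(password: str) -> bool:
        """Check a candidate password against the documented policy."""
        if len(password) < PASSWORD_POLICY["min_length"]:
            return False
        if PASSWORD_POLICY["require_digit"] and not re.search(r"\d", password):
            return False
        if PASSWORD_POLICY["require_upper"] and not re.search(r"[A-Z]", password):
            return False
        return True

    def test_password_policy():
        assert not password_meets_policy("short1A")        # too short
        assert not password_meets_policy("alllowercase1")  # no uppercase letter
        assert password_meets_policy("Sufficient1Length")

    def test_role_separation():
        # Separation of duties: an auditor must not be able to post entries.
        assert "post_entries" not in ROLE_PERMISSIONS["auditor"]

    if __name__ == "__main__":
        test_password_policy()
        test_role_separation()
        print("policy checks passed")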

OWASP lists a few security standards, such as ISO 17799, the International Standard for Information Security Management, a well-adopted and well-understood standard published by the International Organization for Standardization. However, it has rarely been applied specifically by those concerned with managing a secure Web site. When you implement a secure Web application, information security management is unavoidable. ISO 17799 does an excellent job of identifying policies and procedures you should consider, but it does not explain how they should be implemented, nor does it give you the tools to implement them. It is simply a guide to which policies and procedures you should consider; it does not mandate that you implement them all.

OWASP also recommends the Web Application Security Standards (WASS) project, which aims to create a proposed set of minimum requirements a Web application must meet if it processes credit card information. The project's goal is to develop specific, testable criteria that can stand alone or be integrated into existing security standards such as the Cardholder Information Security Program (CISP), which is vendor- and technology-neutral. By testing against this standard, you should be able to determine that minimal security procedures and adherence to best practices have been followed in the development of a Web-based application.

Another such company-wide security regulation could state, for example, "The system needs to consider the HIPAA privacy and security regulations and be compliant" or "The system will meet the FISMA standards" or "The system will be Basel II-compliant" or "The system needs to meet the Payment Card Industry Data Security Standard" or "We have to abide by the Gramm-Leach-Bliley (Financial Modernization) Act," to name just a few. Sometimes a company is required to adhere to numerous such standards, and the creator of the security policy needs to consider all the requirements dictated in them.

Some systems do not fall under the purview of any regulatory acts or guidelines. In those cases, a security policy still should be developed. This article outlines the beginnings of such a security policy. For example, it is important to follow a secure software development lifecycle (SSDL) that includes secure software installation procedures and a well-thought-out patch management process.

Backup/recovery/availability are beyond the scope of this article, but they should be part of your security policy, too.

If no overarching security standard is dictated, you can move directly to Phase 2, security requirements.

It is important not only to document the security policy but also to continuously enforce it by tracking and evaluating it on an ongoing basis.

SSDL Phase 2: Security Requirements: Attack Use Cases

Security requirements are the second phase of the SSDL. A common mistake is to omit security requirements from any type of requirements documentation. However, it is important to document security requirements. Not only do security requirements aid in software design, implementation, and test case development, but they also can help determine technology choices and areas of risk.

The security engineer should insist that associated security requirements be described and documented along with each functional requirement. Each functional requirement description should contain a section titled "Security Requirements," documenting any specific security needs of that particular requirement that deviate from the system-wide security policy or specification.

It is important that guidelines for requirement development and documentation be defined at the project's outset. In all but the smallest programs, careful analysis is required to ensure that the system is developed properly. Attack use cases are one way to document security requirements. They can lead to more thorough secure system designs and test procedures.

Defining a requirement's specific quality measure helps rationalize fuzzy requirements. For example, everyone would agree with a statement such as "The system must be highly secure," but each person may have a different interpretation of "highly secure." Security requirements do not endow the system with specific functions. Instead, they constrain or further define how the system will handle any function that shouldn't be allowed. Here is where the analysts should look at the system from an attacker's point of view. Attack use cases can be developed that show behavioral flows that are not allowed or are unauthorized, and they can help you understand and analyze the security implications of pre- and postconditions. "Includes" relationships can illustrate many protection mechanisms, such as the logon process. "Extends" relationships can illustrate many detection mechanisms, such as audit logging. Attack use cases list the ways in which the system could be attacked.
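
As a lightweight illustration, an attack use case can be captured as a structured record that names the disallowed flow along with its protection ("includes") and detection ("extends") mechanisms. The field names and the sample brute-force case below are assumptions of this sketch, not a prescribed notation:

    # A minimal, illustrative representation of an attack use case record.
    # The field names and the sample case are assumptions for this sketch.
    from dataclasses import dataclass, field

    @dataclass
    class AttackUseCase:
        name: str
        preconditions: list = field(default_factory=list)
        attack_flow: list = field(default_factory=list)     # disallowed behavior
        postconditions: list = field(default_factory=list)  # what must NOT result
        includes: list = field(default_factory=list)        # protection mechanisms
        extends: list = field(default_factory=list)         # detection mechanisms

    brute_force = AttackUseCase(
        name="Brute-force the logon",
        preconditions=["attacker can reach the logon form"],
        attack_flow=["submit many password guesses in rapid succession"],
        postconditions=["attacker must not obtain a valid session"],
        includes=["logon process", "account lockout"],
        extends=["audit logging of failed attempts"],
    )

    print(brute_force.name, "->", brute_force.postconditions)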

Security defect prevention is the use of techniques and processes that can help detect and avoid security errors before they propagate to later development phases. Defect prevention is most effective during the requirements phase, when the impact of a change required to fix a defect is low. If security is on everyone's mind from the beginning of the development lifecycle, everyone can help recognize omissions, discrepancies, ambiguities, and other problems that may affect the project's security.

Requirements traceability ensures that each security requirement is identified in such a way that it can be associated with all parts of the system where it is used. For any change to a requirement, it should be possible to identify all parts of the system that the change affects.

Traceability also lets you collect information about individual requirements and other parts of the system that could be affected, such as designs, code, or tests, if a requirement changes. When informed of requirement changes, security testers can make sure that all affected areas are adjusted accordingly.
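
In practice, a traceability record can be as simple as a mapping from each security requirement to the designs, code, and tests that depend on it. The requirement IDs and artifact names in the following sketch are made up for illustration:

    # A minimal requirements-traceability mapping: each security requirement
    # points to the design, code, and test artifacts that depend on it.
    # All IDs and artifact names here are made up for illustration.
    TRACE = {
        "SEC-001 (encrypt stored PII)": {
            "design": ["storage-encryption design note"],
            "code": ["crypto_store.py"],
            "tests": ["test_encryption_at_rest"],
        },
        "SEC-002 (mutual TLS on the wire)": {
            "design": ["transport-security design note"],
            "code": ["tls_channel.py"],
            "tests": ["test_mutual_auth", "test_no_plaintext_fallback"],
        },
    }

    def affected_artifacts(requirement_id: str) -> list:
        """Everything that must be revisited when this requirement changes."""
        entry = TRACE[requirement_id]
        return entry["design"] + entry["code"] + entry["tests"]

    print(affected_artifacts("SEC-002 (mutual TLS on the wire)"))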

Sample Security Requirements

  • The application stores sensitive user information that must be protected for HIPAA compliance. To that end, strong encryption must be used to protect all sensitive user information wherever it is stored.
  • The application transmits sensitive user information across potentially untrusted or unsecured networks. To protect the data, communication channels must be encrypted to prevent snooping, and mutual cryptographic authentication must be employed to prevent man-in-the-middle attacks.
  • The application sends private data over the network; therefore, communication encryption is a requirement.
  • The application must remain available to legitimate users. Resource utilization by remote users must be monitored and limited to prevent or mitigate denial-of-service attacks.
  • The application supports multiple users with different levels of privilege. The application assigns users to multiple privilege levels and defines the actions each privilege level is authorized to perform. The various privilege levels need to be defined and tested. Mitigations for authorization bypass attacks need to be defined.
  • The application takes user input and uses SQL. SQL injection mitigations are a requirement.

These are just a few examples. A tester who relies solely on requirements for her testing, and who would otherwise miss security testing entirely, is now armed with this set of security requirements and can start developing security test cases, such as the sketch that follows.
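
For instance, the SQL injection requirement above could be verified by a test case like this one, which uses an in-memory SQLite database; the schema and the lookup function are assumptions for illustration:

    # An illustrative security test case for the SQL injection requirement,
    # using an in-memory SQLite database. The schema and lookup function
    # are assumptions for this sketch.
    import sqlite3

    def find_user(conn, username):
        # Parameterized query: the driver treats username strictly as data.
        cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
        return cur.fetchall()

    def test_sql_injection_is_neutralized():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        conn.execute("INSERT INTO users VALUES (1, 'alice')")
        # A classic injection payload must match no rows rather than
        # altering the query's logic.
        assert find_user(conn, "alice' OR '1'='1") == []
        assert find_user(conn, "alice") == [(1,)]

    if __name__ == "__main__":
        test_sql_injection_is_neutralized()
        print("injection test passed")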

SSDL Phase 3: Architectural and Design Reviews/Threat Modeling

Architectural and design reviews and threat modeling represent the third phase of the SSDL.

Security practitioners need a solid understanding of the product's architecture and design so that they can devise better and more complete security strategies, plans, designs, procedures, and techniques. Early security team involvement can prevent insecure architectures and low-security designs, as well as help eliminate confusion about the application's behavior later in the project lifecycle. In addition, early involvement allows the security expert to learn which aspects of the application are the most critical and which are the highest-risk elements from a security perspective.

This knowledge enables security practitioners to focus on the most important parts of the application first and helps testers avoid over-testing low-risk areas and under-testing the high-risk ones.

The benefits of threat modeling are that it finds different issues than code reviews and testing do, and that it can find higher-level design issues as opposed to implementation bugs. Here you can find security problems early, before they are coded into products. This helps you determine the "highest-risk" parts of the application, those that need the most scrutiny throughout the software development effort. Another very valuable aspect of the threat model is that it can give you a sense of completeness. Saying, "Every data input is contained within this drawing" is a powerful statement that can't be made at any other point.
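
As a minimal illustration, a threat model's findings can be ranked by a simple risk score so that scrutiny goes to the riskiest areas first. The STRIDE category names are standard; the sample threats, ratings, and scoring scheme are assumptions of this sketch:

    # A minimal threat-model record ranking threats by a simple risk score.
    # The threats, ratings, and likelihood-times-impact scheme below are
    # assumptions for illustration.
    threats = [
        # (threat, STRIDE category, likelihood 1-5, impact 1-5)
        ("SQL injection via search box", "Tampering", 4, 5),
        ("Session token theft over plain HTTP", "Information disclosure", 3, 4),
        ("Log flooding hides attack traces", "Repudiation", 2, 3),
    ]

    # Rank by likelihood x impact so testing effort goes to the
    # highest-risk parts of the application first.
    for name, category, likelihood, impact in sorted(
            threats, key=lambda t: t[2] * t[3], reverse=True):
        print(f"risk={likelihood * impact:2d}  [{category}] {name}")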

SSDL Phase 4: Secure Coding Guidelines

Secure coding guidelines are the fourth phase of the SSDL. A developer needs to understand in detail how vulnerabilities get into software, learn how to prevent them from sneaking into her programs, and be able to differentiate between design and implementation vulnerabilities.

A design vulnerability is a flaw in the design that precludes the program from operating securely no matter how perfectly it is implemented by the coders. Implementation vulnerabilities are caused by security bugs in the actual coding of the software.

Static analysis tools can detect many implementation errors by scanning the source code or the binary executable. These tools are quite useful in finding issues such as buffer overflows, and their output can help developers learn to prevent the errors in the first place.
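
The following toy sketch conveys the idea by walking a Python syntax tree and flagging calls that a hypothetical secure coding guideline prohibits; real static analysis tools are, of course, far more sophisticated:

    # A toy illustration of static analysis: walk a Python syntax tree and
    # flag calls that a (hypothetical) secure coding guideline prohibits.
    import ast

    BANNED_CALLS = {"eval", "exec", "system"}  # assumed guideline for the sketch

    def find_banned_calls(source: str):
        findings = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call):
                func = node.func
                # Handles both plain names (eval) and attributes (os.system).
                name = getattr(func, "id", getattr(func, "attr", None))
                if name in BANNED_CALLS:
                    findings.append((node.lineno, name))
        return findings

    sample = "import os\nuser_input = input()\neval(user_input)\nos.system(user_input)\n"
    for lineno, name in find_banned_calls(sample):
        print(f"line {lineno}: call to banned function '{name}'")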

Software developers and testers should attend training sessions on how to develop secure code according to these secure coding standards, and each development effort should adhere to the secure design and coding guidelines.

Using the secure coding standards as baselines, testers can then develop test cases to verify that the standard is being followed.

There are also services where you can send your code and have a third party analyze it for defects. The benefit of using a third party is that the outside firm can validate your code security for compliance reasons or customer requirements. Because this usually doesn't take place until the code has already been developed, it is recommended that initial standards be devised and followed. The third party can then focus on and verify adherence and uncover other security issues.

SSDL Phase 5: Black/Gray/White Box Testing

Black/gray/white box testing is the fifth phase of the SSDL. Test environment setup is part of security test planning: it addresses the need to plan, track, and manage test environment setup activities, some of which involve material procurements with long lead times. The test team needs to schedule and track environment setup activities; install test environment hardware, software, and network resources; integrate and install test environment resources; obtain and refine test databases; and develop environment setup scripts and test bed scripts.

Additionally, this phase includes developing security test scripts based on the attack use cases described in SSDL Phase 2; executing and refining those scripts; conducting evaluation activities to weed out false positives and false negatives; documenting security problems via system problem reports; helping developers understand and reproduce each issue; performing regression and other tests; and tracking problems to closure.
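
A minimal sketch of such a security test script follows: it probes a form parameter with attack strings and flags responses that leak database errors. The target URL and the error signatures are assumptions, and probes like these should be run only against systems you are authorized to test:

    # A sketch of a black-box security test script driven by attack use
    # cases: probe a form parameter with attack strings and flag responses
    # that leak error details. The URL and signatures are assumptions.
    import urllib.parse
    import urllib.request

    TARGET = "http://test-server.example.com/search"  # hypothetical endpoint
    ATTACK_STRINGS = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]
    ERROR_SIGNATURES = ["SQL syntax", "ODBC", "Traceback"]

    def probe(payload: str) -> list:
        url = TARGET + "?" + urllib.parse.urlencode({"q": payload})
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        return [sig for sig in ERROR_SIGNATURES if sig in body]

    if __name__ == "__main__":
        for payload in ATTACK_STRINGS:
            leaks = probe(payload)
            status = "POSSIBLE ISSUE" if leaks else "ok"
            print(f"{status}: payload={payload!r} signatures={leaks}")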

SSDL Phase 6: Determining Exploitability

Determining exploitability is the sixth phase of the SSDL. Ideally, every vulnerability that an outside analyst alerts us to, or that is discovered in the testing phase of the SSDL, could be easily fixed. Depending on the vulnerability's cause, whether a design error or an implementation error, the effort required to address it can vary widely. A vulnerability's exploitability is an important factor in gauging the risk it presents. You can use this information to prioritize the vulnerability's remediation among other development requirements, such as implementing new features and addressing other security concerns.

Determining a vulnerability's exploitability involves weighing five factors:

  1. The access or positioning required by the attacker to attempt exploitation
  2. The level of access or privilege yielded by successful exploitation
  3. The time or work factor required to exploit the vulnerability
  4. The exploit's potential reliability
  5. The repeatability of exploit attempts

This is where the risk posed by each vulnerability is used to prioritize its remediation relative to other vulnerabilities and other development tasks (such as new features). This is also the phase in which you can manage external vulnerability reports, as described later.
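
As an illustration, the five factors listed above could be folded into a single score for prioritization. The 1-to-5 scales and equal weighting in this sketch are assumptions, not a standard formula:

    # An illustrative way to combine the five exploitability factors into
    # one score. The 1-5 scales and equal weighting are assumptions of
    # this sketch, not a standard formula.
    FACTORS = ("access_required", "privilege_gained", "work_factor",
               "reliability", "repeatability")

    def exploitability_score(ratings: dict) -> float:
        """Average the five ratings, each from 1 (hard for the attacker)
        to 5 (easy); higher means more urgently exploitable."""
        return sum(ratings[f] for f in FACTORS) / len(FACTORS)

    # Hypothetical vulnerabilities awaiting prioritization.
    vulns = {
        "unauthenticated remote overflow": dict(
            access_required=5, privilege_gained=5, work_factor=4,
            reliability=4, repeatability=5),
        "local race condition": dict(
            access_required=2, privilege_gained=3, work_factor=2,
            reliability=2, repeatability=2),
    }

    for name, ratings in sorted(vulns.items(),
                                key=lambda kv: exploitability_score(kv[1]),
                                reverse=True):
        print(f"{exploitability_score(ratings):.1f}  {name}")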

Exploitability needs to be re-evaluated regularly, because exploitation always gets easier over time: cryptography weakens, attackers discover new techniques, and so on.

This concludes the summary of the six phases or components that make up the Secure Software Development Lifecycle. The application is now ready to be deployed. It is imperative that secure defaults be set and understood, and that testers verify those settings.

Deploying Applications Securely

The process of deploying and maintaining the application securely occurs at the end of the lifecycle. Of course, designing the application so that it can be deployed securely needs to start at the beginning. Secure deployment means that the software is installed with secure defaults: file permissions are set appropriately, and the application's secure configuration settings are used.
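
A post-deployment check can verify some of these properties automatically. The following sketch tests that sensitive files are not world-accessible and that secure configuration defaults are in effect; the paths, permission bits, and setting names are assumptions:

    # A sketch of a post-deployment check: verify that sensitive files are
    # not world-readable/writable and that secure configuration defaults
    # are in effect. Paths, mode bits, and setting names are assumptions.
    import os
    import stat

    SENSITIVE_FILES = ["app.conf", "users.db"]         # hypothetical paths
    REQUIRED_SETTINGS = {"debug": "off", "tls": "on"}  # hypothetical defaults

    def world_accessible(path: str) -> bool:
        mode = os.stat(path).st_mode
        return bool(mode & (stat.S_IROTH | stat.S_IWOTH))

    def check_deployment(config: dict) -> list:
        problems = []
        for path in SENSITIVE_FILES:
            if os.path.exists(path) and world_accessible(path):
                problems.append(f"{path} is accessible by everyone")
        for key, expected in REQUIRED_SETTINGS.items():
            if config.get(key) != expected:
                problems.append(f"setting {key!r} should be {expected!r}")
        return problems

    if __name__ == "__main__":
        # In a real deployment this would be read from the installed config.
        for problem in check_deployment({"debug": "on", "tls": "on"}):
            print("FAIL:", problem)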

Additionally, the secure deployment has to be monitored constantly, and vulnerabilities have to be managed.

Patch Management: Managing Vulnerabilities

After you develop the software using the SSDL, it is important to put a patch management process in place to allow for managing vulnerabilities.

Important parts of maintaining a secure application environment include tracking and prioritizing internally and externally identified vulnerabilities, auditing source code out of cycle, and performing penetration testing when a number of external vulnerabilities are identified in a component.
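
As a minimal illustration, such tracking and prioritizing could be organized as a priority queue of reported vulnerabilities; the fields, priorities, and sample entries are assumptions of this sketch:

    # A minimal sketch of a patch-management queue: track internally and
    # externally reported vulnerabilities and work them in priority order.
    # The priorities and sample entries are assumptions.
    import heapq

    queue = []  # (negated priority, description, source) for a max-heap

    def report(priority: int, description: str, source: str):
        """Record a vulnerability; higher priority means address sooner."""
        heapq.heappush(queue, (-priority, description, source))

    report(9, "remote code execution in upload handler", "external advisory")
    report(4, "verbose error page leaks stack traces", "internal audit")
    report(7, "outdated TLS library in web tier", "penetration test")

    while queue:
        neg_priority, description, source = heapq.heappop(queue)
        print(f"priority {-neg_priority}: {description} ({source})")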

Roles and Responsibilities

It is often unclear whose responsibility security really is. Is it the sole responsibility of the infrastructure group that sets up and monitors the networks? Is it the architect's responsibility to design security into the software? For effective security testing to take place, roles and responsibilities have to be clarified. In a secure software development lifecycle, you will find that security is the responsibility of many. It is a mistake to rely on the infrastructure or network group to simply set up the IDSs and firewalls, run a few network security tools, and consider your application secure. Roles and responsibilities must be defined so that everyone understands who is testing what, and so that an application testing team, for example, doesn't assume a network testing tool will also catch application vulnerabilities.

The program or product manager should write the security policies. They can be based on the standards dictated, if applicable, or based on the best security practices discussed here. The product or project manager also is responsible for handling a security certification process if no specific security role is available. Architects and developers are responsible for providing design and implementation details, determining and investigating threats, and performing code reviews. QA/testers drive critical analyses of the system, take part in threat-modeling efforts, determine and investigate threats, and build white box and black box tests. Program managers manage the schedule and own individual documents and dates. Security process managers can oversee threat modeling, security assessments, and secure coding training.

For maximum test program benefit, the SSDL is integrated with the system lifecycle. During the business analysis and requirements phase, the security requirements are defined. Defining them requires involving all stakeholders, communicating nomenclature, and educating people.

During the prototype/design/architecture phase, the security group supports this effort by conducting architectural reviews and threat modeling to point out any potential security holes.

Secure coding guidelines help you keep defects from getting into your code. They need to be adhered to during software development. White/gray/black box security testing techniques take place in conjunction with the integration and testing phase. System testing occurs after the first software build has been baselined. Determining exploitability occurs throughout the lifecycle and is finalized during the system development production and maintenance phase. Mitigation strategies need to be well thought out and implemented accordingly. Security program review and assessment activities need to be conducted throughout the testing lifecycle to allow for continuous improvement activities. Secure deployment considerations have to be implemented. Following security test execution, metrics can be evaluated, and final review and assessment activities need to be conducted to allow for adequate and informed decision-making.

Deploying the application securely is done at the end of the lifecycle: the software is installed with secure defaults, file permissions are set appropriately, and the application's secure configuration settings are used.

Determining exploitability, vulnerability management, and patch management are best done during the maintenance phase. If you suspect something may be a security issue during development, you should simply fix the code to remove any doubt. But if the software is already deployed, a fix becomes very expensive. In that case, the techniques for determining exploitability should be used so that you do not generate additional costs and work for your customers unless absolutely necessary.

Summary

Focusing on application security throughout the software development lifecycle is most efficient and is just as important as focusing on infrastructure security. In addition to the secure software lifecycle defined here, a good patch management process should be in place. No matter how good your secure software development lifecycle is or how solid your infrastructure security, be sure to patch your systems. It is a good idea to sign up for alerting services such as DeepSight so that you know when vulnerabilities have been uncovered and when patches are available.
