
Software Security Testing.......

Printed From: One Stop Testing
Category: Types Of Software Testing @ OneStopTesting
Forum Name: Security Testing @ OneStopTesting
Forum Description: Discuss all that needs to be known about Security Testing, all security issues and their tools.
URL: http://forum.onestoptesting.com/forum_posts.asp?TID=88
Printed Date: 29Dec2024 at 2:18am


Topic: Software Security Testing.......
Posted By: Riya
Subject: Software Security Testing.......
Date Posted: 17Feb2007 at 10:43am

Software Security Testing

        “Software is the fuel on which modern businesses are run, governments rule and societies become better connected. Software has helped us create, access and visualize information in previously inconceivable ways and forms. Globally, the breathtaking pace of progress in software has helped drive the growth of the world’s economy. On a more human scale, software-intensive products have helped cure the sick and given voice to the speechless, mobility to the impaired and opportunity to the less able. From all these perspectives, software is an indispensable part of our modern world.”

        As these advances take hold, it seems that software security is not keeping pace. Despite new technical standards, such as WS-Security, and modern replacements like AES for aging cryptography algorithms such as DES, the security problem appears to be getting worse, not better. To make matters worse, according to research by the National Institute of Standards and Technology (NIST), 92% of all security vulnerabilities are now considered application vulnerabilities and not network vulnerabilities.

        Why are so many people getting software security so wrong, and what is needed before things improve? As any good software analyst would, it helps to look at the root cause of the problem before suggesting better ways of working. In particular, we are going to introduce a better way of testing Web software security from OWASP, the Open Web Application Security Project.

        Over the last decade, a vast security industry grew out of an earlier, more innocent hacking community. The early, predominantly academic community had access to the first Unix derivatives, and in an effort to explore the boundaries of the new technology, created extensions and “hacked” the code to get it to do what they wanted. The emerging computer networks, like telephony systems, were places to discover how technology was being used in practice and, in doing so, enthusiasts started sharing tricks about ways to do things they thought were cool. These garage meetings and underground cultures spawned hackers with handles like “Captain Crunch” and “Condor,” and a subculture of exploit sharing ensued.

        The mainstream security industry has followed the mass technology adoption path from Internet networking through to the mainstream emergence of commercial operating systems and applied many of the same tricks from those early days. If you walk around Las Vegas during the annual Defcon hacker convention, you could be forgiven for thinking that very little has changed from those early days (apart from commercialization of the subculture and the adoption of games like “Hacker Jeopardy,” where girls take their clothes off in response to the audience answering technical questions). While a small portion of the mainstream hacking crowd today has its roots in developing C code, the majority use automated security scanning tools or exploit code written by the elite few.

        While the system admin skills translated relatively well to network and operating-system security, where functionality was predetermined or at least bounded, as we evolve into solving the new generation of security problems, we need a different approach and a different skill set. The first generation of vendor tools was called application security tools, the connotation being that once the software is built and compiled, the problems are instantiated into applications.

        Fundamentally, we have to accept that if software security is the problem, then the solution lies in building secure software. Building secure software is very different from learning how to securely configure a firewall. Security must be baked into the software, not painted on afterwards.

        It can be helpful to think of secure software development as a combination of people, process and technology. If these are the factors that “create” software, then it is logical that these are the factors that also must be tested. Today, most people just test the technology or the software itself. In fact, most people today don’t test the software until it has already been created and is in the deployment phase of its life cycle (that is, code has been created and instantiated into a working Web application). Instead they apply penetration-testing techniques after the application has been posted to the Internet. This is generally a very ineffective and cost-prohibitive practice. Grady Booch, software modeling author and contributor to the Unified Modeling Language (UML), describes this approach as a case where treating the symptoms doesn’t cure the disease. This is very true of software security. The security industry must adapt and learn the hard lessons that the software development community has realized over the past decade. Just as software engineers have learned that you don’t engineer software the same way you engineer a skyscraper, the security industry also must adapt if it is to survive.

        An effective testing program should have components that test:

People—to ensure that there is adequate education, awareness and skill.

Process—to ensure that the team has policies, that people know how to follow those policies, and that appropriate and effective techniques and processes are in place to do so.

Technology—to ensure that the process has been effective in its implementation and that it leverages the appropriate security features in development languages and frameworks.

Unless a holistic approach is adopted, testing only the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. People have to test for the disease and not the symptoms, and when they find they have the disease, they must prescribe both a short-term and a long-term treatment to manage it. By testing the people and, most importantly, the process, you can catch issues that would later manifest themselves as defects in the technology (the symptoms), thus eradicating bugs early and identifying the root causes of defects.

        Likewise today, people are usually only testing some of the technical issues that can be present in a system. This results in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of the Corporate Information Security Group at Fidelity National Financial, presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York.

        “If cars were built like applications…safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact and resistance to theft,” Verdon stated at the conference.

        Many organizations have started to use Web application scanners. These black-box scanners attempt to crawl across Web sites and find security bugs, acting like an automated hacker. While they may have a place in a testing program, we don’t believe that automated black-box testing is, or will ever be, effective on its own. It should be one of the last things an organization turns to when testing for the problem, not the first. We are not discouraging the use of Web application scanners. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.

        As an example, we once reviewed a Web site where the designers created an administrative backdoor that was left in the production application. When you click on a link or submit user input forms to a Web site, you send in parameters. When you get the weather for your local area from www.weather.com, for instance, you may be sending your zip code as a parameter. In the application in question, the developers had written the site in such a way that if you sent in a seemingly random set of parameters that was 40 characters long, you were automatically logged in as the administrator. The only way for a Web application scanning tool to have found the hole would have been to guess the exact code from literally billions of options, something that is infeasible; it would have taken several hundred years in this case.

        However, even for a non-programmer looking at the source code, it was obvious that this flaw existed. The logic followed the basic flow of:

        if input from browser = “the special 40 characters” then log user in as administrator

        One malicious user could have sent a hyperlink out to hundreds or thousands of people on a public mailing list, and everyone would have been able to take total control of the site and the business’s client accounts.
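The flaw above can be sketched in a few lines. The real parameter value was not published, so the secret below is a placeholder, but the vulnerable logic and the scale of the search space a black-box scanner would face are faithful to the description:

```python
# Hypothetical reconstruction of the backdoor; the real 40-character
# secret is unknown, so a placeholder stands in for it.
MAGIC_TOKEN = "A" * 40  # stand-in for the fixed 40-character secret

def is_admin(param: str) -> bool:
    # The vulnerable logic: one comparison against a baked-in secret.
    return param == MAGIC_TOKEN

# Why blind guessing fails: even restricted to digits alone, a
# 40-character token allows 10**40 values, vastly more than the
# "billions" of options any scanner could ever try.
keyspace = 10 ** 40
print(is_admin(MAGIC_TOKEN))   # True - the backdoor fires
print(is_admin("guess" * 8))   # False - any blind 40-character guess
```

A source-code reviewer spots the hard-coded comparison immediately; a scanner sending random parameters almost certainly never will.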

        This type of example is all too common. Other similar situations have occurred in which developers created their own encryption routines that, to the untrained eye, passed as garbage, but that a basic security analyst could trivially decrypt. One such application I tested contained the account number from which to debit mortgage payments—quite a shock for a customer who finds he is paying a stranger’s loan. These types of issues cannot be practically and effectively found using black-box scanning techniques.
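The actual homegrown routine is not shown in the article, but the failure mode is easy to illustrate with a hypothetical example: output that “passes as garbage” to the untrained eye can be a plain encoding rather than encryption, reversible by anyone who recognizes it:

```python
import base64

# Illustrative only: a made-up account number, "scrambled" with an
# encoding that is not encryption at all.
account = b"ACCT-00123456"          # hypothetical account number
scrambled = base64.b64encode(account).decode()
print(scrambled)                    # looks like random characters
print(base64.b64decode(scrambled))  # trivially recovered - no key needed
```

Real encryption requires a vetted algorithm and a secret key; anything a reviewer can reverse by inspection offers no protection at all.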

        Testing has shown that the current crop of Web application scanners find fewer than 20% of the vulnerabilities in a common Web site. That leaves about 80% in production code for the hackers to find and exploit.

        As Gary McGraw, author of “Building Secure Software,” puts it so eloquently, “If you fail a penetration test, you know you have a really bad problem; if you pass a penetration test, you have no idea you don’t have a really bad problem.”

        To make matters worse for the development community, the hacker community has long loved the use of technical jargon to describe security issues. A group of application-scanning vendors, whose members include those who have published holes in Hotmail and other live Web systems, recently banded together to create a taxonomy of terms with the objective of avoiding jargon. Ironically, the results of their work include terms like “Insufficient Anti-automation,” “Abuse of Functionality” and “OS Commanding,” which only exacerbates the problem with this “jargon-busting jargon.” It is all too easy to see why the software industry is having a hard time taking advice from the security industry.

        Luckily, OWASP has attracted the support of many development-focused practitioners, garnered the support of IT security executives at large companies, such as Fidelity, and is slowly molding the mainstream perception of how to address software security.

        Verdon of Fidelity National says, “The security community still hasn’t caught up with the challenges faced by the application-development community and is, frankly, leaving many of them still not knowing what questions to ask. Sure the architects know J2EE’s and .Net’s security frameworks, but nobody—up until recently, that is—was giving them a good set of requirements to work with. That was the security industry’s failing. The people that make up OWASP realize this and are beginning to bridge that gap. OWASP members are helping to build the common language we need to model security issues and their solutions accurately.”

OWASP Testing Project

        As part of a larger vision to provide a cohesive set of documentation and supporting software, one project paving the way is the two-part OWASP Testing Project. Part One of the project—to introduce a task-based, security-testing framework—will help organizations build their own testing process. Part Two supports the framework with detailed technical descriptions of how to implement the high-level process, including how to examine code for specific issues. Part One is due for release in September 2004, and Part Two early in 2005. As with all OWASP projects, the documentation and software are open source and free.

        This task-oriented testing framework consists of activities that should take place:
  • Before Development Begins
  • During Definition and Design
  • During Development
  • During Deployment
        Before application development has started, the framework recommends testing to ensure that there is an adequate Software Development Life Cycle process and that security is inherent to the process. It recommends testing to ensure that the appropriate policy and standards are in place for the development team, and that the development team creates metrics and measurement criteria. These concepts should be nothing new to development teams that adopt best practices, such as the Rational Unified Process originated at Rational Software (acquired by IBM in 2003).

        Alongside ensuring that security is an integral part of the software development life cycle itself, ensuring that there are appropriate policies, standards and documentation in place is a must. Documentation is extremely important because it gives development teams guidelines and policies that they can follow: People can only do the right thing if they know what the right thing is. If the application is to be developed in Java, it is essential that there is a Java secure-coding standard. If the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face. By documenting the common and predictable issues, there will be fewer decisions that need to be made during the development process, and security implementation risk will be reduced.

Testing Requirements is Essential

        During definition and design, the testing work heats up in earnest. Security requirements define how an application works from a security perspective. It is essential that security requirements be tested. Testing in this case means testing that the requirements exist in the first place, and testing to see if there are gaps in the requirement definitions. For example, if there is a security requirement that states that users must be registered before they can get access to the white papers section of a Web site, does this mean that the user must be registered with the system and assigned a role? When looking for requirement gaps, consider looking at security mechanisms such as:
  • User Management (password reset, etc.)
  • Authentication
  • Authorization
  • Session Management
  • Transport Security
  • Data Protection
  • Data Validation
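The gap check described above can itself be made mechanical. The sketch below is a minimal, illustrative version: the mechanism list comes from the checklist above, while the two requirement entries are invented examples of a partially documented requirements set:

```python
# A minimal sketch of a requirements gap check against the mechanism
# checklist above; the requirement texts are invented examples.
MECHANISMS = [
    "user management", "authentication", "authorization",
    "session management", "transport security",
    "data protection", "data validation",
]

requirements = {
    "authentication": "Users must register before accessing white papers.",
    "authorization": "Registered users are assigned a role at sign-up.",
}

gaps = [m for m in MECHANISMS if m not in requirements]
print("Mechanisms with no documented requirement:", gaps)
```

Even a simple listing like this turns “are there gaps?” from an opinion into a reviewable artifact the team can work through mechanism by mechanism.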
        Applications should have a documented design and architecture. By documented we mean models, textual documents and other similar artifacts. It is essential to test these artifacts to ensure that the design and architecture enforce the appropriate level of security as defined in the requirements.

        Identifying security flaws in the design phase is not only one of the most cost-efficient places to identify flaws, but also can be one of the most effective places to make changes. For example, by being able to identify that the design calls for authorization decisions to be made in multiple places, it may be appropriate to consider a central authorization component. If the application is performing data validation at multiple places, it may be appropriate to develop a central validation framework (fixing data validation in one place, rather than hundreds of places, is far cheaper). Security design patterns are becoming increasingly popular and useful at this stage. If weaknesses are discovered, they should be given to the system architect for alternative approaches.
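A central validation framework of the kind suggested above can be tiny. This is a sketch under assumptions, not a prescribed design: the field names and rules are illustrative, and the point is simply that fixing a rule in one place fixes every input path that uses it:

```python
import re

# A minimal sketch of a central validation framework; field names
# and patterns are illustrative assumptions.
RULES = {
    "zip_code": re.compile(r"\d{5}"),
    "username": re.compile(r"[A-Za-z0-9_]{3,20}"),
}

class ValidationError(ValueError):
    """Raised when input fails the central rules."""

def validate(field: str, value: str) -> str:
    rule = RULES.get(field)
    if rule is None:
        # Fail closed: a field with no rule is rejected, not waved through.
        raise ValidationError(f"no rule for field {field!r}")
    if not rule.fullmatch(value):
        raise ValidationError(f"bad value for field {field!r}")
    return value
```

Every form handler then calls `validate()` instead of rolling its own checks, so a missed edge case is corrected once rather than hunted down in hundreds of places.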

        UML models that describe how the application works are very useful for security. In some cases, these may already be available. Use these models to confirm with the systems designers an exact understanding of how the application works. There are specialized UML dialects for modeling security, such as SecureUML.

        Armed with design and architecture reviews, and the UML models explaining exactly how the system works, undertake a threat-modeling exercise. Threat modeling has become a popular technique for looking at an application from an adversary’s perspective and understanding what he or she would seek to exploit and what countermeasures are in place. Develop realistic threat scenarios. Analyze the design and architecture to ensure that these threats have been mitigated, accepted by the business or assigned to a third party, such as an insurance firm. When identified threats have no mitigation strategies, revisit the design and architecture with the systems architect to modify the design.
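The bookkeeping for the exercise above can be as simple as a threat register. The scenarios and statuses below are illustrative assumptions; the mechanics show the rule in the text: anything not mitigated, accepted, or transferred goes back to the architect:

```python
# A minimal sketch of a threat register; scenarios and statuses
# are illustrative assumptions, not findings from a real system.
threats = [
    {"scenario": "Session token intercepted over plain HTTP",
     "status": "mitigated"},    # countermeasure: TLS on every page
    {"scenario": "Password brute force on the login form",
     "status": "transferred"},  # e.g. covered by an insurance policy
    {"scenario": "Order total tampered with in a hidden form field",
     "status": "open"},         # no countermeasure yet
]

ACCEPTABLE = {"mitigated", "accepted", "transferred"}
revisit = [t["scenario"] for t in threats if t["status"] not in ACCEPTABLE]
for scenario in revisit:
    print("Take back to the architect:", scenario)
```

The value is less in the code than in the discipline: every scenario ends the exercise with an explicit disposition, and the open ones are impossible to lose.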

        Theoretically, development is the implementation of a design. However, in the real world, many design decisions are made during code development. These are often smaller decisions that were either too detailed to be described in the design or requirements or, in other cases, issues where no policy or standards guidance was offered. If the design and architecture was not adequate, the developer will be faced with many decisions. If there were insufficient policies and standards, the developer will be faced with even more decisions.

        A security team should perform a code walkthrough with the developers and, in some cases, the system architects. A code walkthrough is a high-level walkthrough of the code where the developers can explain the logic and flow. It lets the code review team obtain a general understanding of the code, and it lets the developers explain why certain things were developed the way they were. The purpose is not to perform a code review, but to understand the flow at a high level, and the layout and structure of the code that makes up the application.

        Armed with a good understanding of how the code is structured and why certain things were coded in a specific way, the tester now can examine the actual code for security defects.

        Having tested the requirements, analyzed the design, developed threat models and countermeasures and performed code review, it might be assumed that all issues have been caught. Hopefully, this is the case, but penetration-testing the application after it has been deployed provides a last check to ensure that nothing has been missed. The application-penetration test should include checking how the infrastructure was deployed and secured. While the application may be secure, a small aspect of the configuration could still be at a default install stage and vulnerable to exploitation. This is where automated black-box scanners actually are effective, looking for known holes caused by poor configuration management. Remember, if you find a security bug in the application at this stage, it will be expensive to fix and, in some cases, such as user management systems, it will not be possible without huge design changes.

        A successful testing program tests people, process and technology, despite several vendors offering to sell you “silver bullets” that simply don’t exist in the software security-testing world (nor the protection world, but that’s saved for a follow-up story).

        A good testing program engages a security team that is predominantly made up of the development team itself and not the IS department, which rarely has development skills or experience. Good teams place a higher emphasis on testing the definition and design stages of a life cycle, with less emphasis on deployment.


