Consumers and businesses today have attained a certain level of trust in online activity and interactions. They don’t hesitate to provide financial information via their bank’s Web site, use Web applications to shop online, book flights via the Web, or access corporate intranets to communicate sensitive internal information.
This has presented a new challenge for enterprises as they continue to build more complex applications to run their online businesses. Secure application development requires a constant balancing act between functional requirements and business drivers, deadlines and limited resources, and risk and flexibility. Success comes to organizations that build security into all phases of their application development lifecycle. By focusing on the security risks inherent in the application development process, and the possible risks to an organization’s customers, developers can apply these principles to any programming language or technology.
Why is application security so important for enterprises in today’s business environment?
Gone are the days when security breaches could be pushed aside and dealt with behind closed doors. Security breaches of all flavors have made front-page news since the beginning of 2005—several of which can be blamed on the insecure applications rolled out by organizations to enable online activities for customers and end users.
Consider two examples. Recently, alumni at a large university were notified that personal information stored on a server used by the school for fund raising could have been exposed to intruders. Similarly, hackers compromised databases belonging to a well-known online information provider, stealing personal information on large numbers of individuals.
At the same time, organizations are building more complex applications to run their online businesses, while consumers continue to entrust organizations with very sensitive data.
What role does regulatory compliance play in application security?
Organizations are being forced to comply with an alphabet soup of regulations, everything from the Sarbanes-Oxley Act (SOX) to the Health Insurance Portability and Accountability Act (HIPAA), which means security is not just a business issue, but also a legal issue. The California Security Breach Information Act, for example, requires that state agencies and businesses that collect personal information from Californians promptly disclose certain types of security lapses or face harsh penalties.
Who should be concerned with application security?
Architects, developers, and project managers building third-party commercial applications should place emphasis on application security principles. However, vulnerabilities within custom in-house applications are even more common, and they pose significant risk to sensitive information such as consumers’ personal financial data.
What are the inherent security risks in the application design process?
Developers need to build in secure coding techniques covering areas such as encryption, authentication, and password handling. Until recently, many of these techniques were not taught in college courses for software developers, so many software engineers writing code today have never been educated about them and are not aware of the potential problems. For example, there are standard library functions in the C/C++ programming languages that are inherently insecure and should be used with great caution, or avoided altogether.
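As an illustration, the sketch below contrasts one of those insecure standard functions, strcpy(), with a bounds-checked alternative. This is a minimal sketch for discussion, not production code; the helper names are inventions for this example.

```c
/* Contrast a classic insecure C library call with a safer alternative.
 * Other functions in the same insecure family include gets() and sprintf(). */
#include <stdio.h>
#include <string.h>

/* UNSAFE: strcpy() copies until it finds a terminating NUL, with no
 * bounds check, so input longer than dest overflows the buffer. */
void copy_unsafe(char *dest, const char *src) {
    strcpy(dest, src);                /* no length limit at all */
}

/* SAFER: limit the copy to the destination's capacity and guarantee
 * NUL termination. */
void copy_safe(char *dest, size_t dest_size, const char *src) {
    if (dest_size == 0)
        return;
    strncpy(dest, src, dest_size - 1);
    dest[dest_size - 1] = '\0';       /* strncpy may not NUL-terminate */
}
```

The safe version truncates rather than overflows; a stricter policy would reject over-long input outright instead of truncating.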
What are some of the most common vulnerabilities in enterprise applications?
Common vulnerabilities can include authorization bypass, SQL injection vulnerabilities, buffer overflow, and information leaks and can affect both commercial and custom applications. Authorization bypass occurs when a normal user is able to access information from a Website or other application that was meant for an administrator or select group of individuals.
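A server-side role check is the usual defense against authorization bypass. The sketch below is hypothetical (the session_t type and can_view_admin_report() function are inventions for illustration): the point is that the server re-checks the authenticated user’s role on every request rather than trusting anything the client supplies, such as a hidden form field or URL parameter.

```c
#include <stddef.h>

typedef enum { ROLE_USER, ROLE_ADMIN } role_t;

typedef struct {
    const char *name;   /* authenticated username */
    role_t role;        /* role established at login, held server-side */
} session_t;

/* Deny by default: an unauthenticated request gets nothing, and the
 * role is checked on the server for every admin-only resource. */
int can_view_admin_report(const session_t *s) {
    if (s == NULL)
        return 0;                     /* no session: deny */
    return s->role == ROLE_ADMIN;     /* authorize per request */
}
```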
SQL injection is a technique for exploiting Web applications that use client-supplied data in SQL queries without first removing potentially harmful characters. Quite a few systems connected to the Internet remain vulnerable to this type of attack. In a typical scenario, data provided by a user, such as an account number or username, is used to look up additional data in the SQL database. A knowledgeable attacker can instead supply SQL commands, which get passed to the database and executed. The attacker can then inject commands and manipulate the database at will, for example extracting user account information and details.
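The mechanics can be seen without touching a real database. In this hypothetical sketch, build_query() pastes client-supplied data straight into the SQL text, so a crafted “account number” rewrites the query’s logic. A real fix is a parameterized query (for example, sqlite3_prepare_v2() with sqlite3_bind_text() in SQLite’s C API), which keeps user data out of the SQL text entirely.

```c
#include <stdio.h>
#include <string.h>

/* VULNERABLE: the user-supplied value is concatenated directly into
 * the query string, so SQL metacharacters in the input become SQL. */
void build_query(char *out, size_t out_size, const char *account) {
    snprintf(out, out_size,
             "SELECT * FROM accounts WHERE id = '%s'", account);
}

/* With the malicious input  0' OR '1'='1  the resulting query is
 *   SELECT * FROM accounts WHERE id = '0' OR '1'='1'
 * whose WHERE clause is always true, matching every row. */
```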
Buffer overflow is another example of a vulnerability that has plagued the commercial software industry and can also appear in custom applications. A buffer overflow occurs when a program or process tries to store more data in a buffer (a temporary data storage area) than it was intended to hold. Because buffers are created to hold a limited amount of information, the extra data can spill over into adjacent buffers, corrupting or overwriting the valid data held in them.
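One common defense is simply to check the length of incoming data before storing it. In the sketch below (store_account() is a hypothetical helper), input that would not fit in the fixed-size buffer is refused outright, rather than being allowed to spill into adjacent memory or silently truncated.

```c
#include <string.h>

#define ACCT_LEN 8  /* fixed capacity: an 8-character account number */

/* Returns 1 and copies on success; returns 0 if the input would
 * overflow the destination buffer. */
int store_account(char dest[ACCT_LEN + 1], const char *input) {
    size_t len = strlen(input);
    if (len > ACCT_LEN)
        return 0;                   /* would overflow: reject */
    memcpy(dest, input, len + 1);   /* fits, including the NUL */
    return 1;
}
```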
When do vulnerabilities find their way into the application design process?
Vulnerabilities typically find their way into applications during two phases of development: application design and application implementation. It is best to identify vulnerabilities during the design phase, rather than discovering issues during implementation and having to go back and re-design pieces of the application.
How can developers address security from the beginning of application development and design?
A holistic approach to building security into the development lifecycle will save tremendous amounts of time and money because problems are identified early in the process and continue to be addressed at each step. Security practices should be in place during requirements planning, design time, implementation, and testing time, in order to catch the majority of problems as early in the cycle as possible.
It is less expensive and less disruptive to discover design-level vulnerabilities during the design phase than during implementation or testing, when fixing them forces a costly re-design of pieces of the application. For example, if proper authentication of administrators is not built into the program from the beginning, it is much more time-consuming and risky to fix during the final QA phase.
What is involved in the testing phase for vulnerability identification?
Application testing should be conducted by QA staff who understand the importance of testing security as well as functionality. QA should apply security testing processes that verify the security features are working properly. Additionally, they should perform negative testing to determine how the application handles unexpected data such as long strings, special characters, and error conditions. QA should use a problem-tracking system to prioritize security issues alongside other program defects, so that security issues can be fixed just like any other program flaw. Common testing methods include load-testing tools and tools that generate input data to probe for cross-site scripting, SQL injection, and buffer overflow vulnerabilities.
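A negative test can be sketched in a few lines. Here validate_username() stands in for whatever validation routine the application under test provides, and run_negative_tests() feeds it the kinds of hostile input described above; both names are hypothetical inventions for this example.

```c
#include <ctype.h>
#include <string.h>

/* Stand-in for the routine under test: accept only 1-8 alphabetic
 * characters. */
int validate_username(const char *s) {
    size_t len = strlen(s);
    if (len == 0 || len > 8)
        return 0;
    for (size_t i = 0; i < len; i++)
        if (!isalpha((unsigned char)s[i]))
            return 0;
    return 1;
}

/* Negative testing: every one of these deliberately bad inputs must be
 * rejected.  Returns 1 only if nothing slips through. */
int run_negative_tests(void) {
    const char *bad[] = {
        "",                        /* empty input */
        "thisnameiswaytoolong",    /* long string */
        "rob;rm-rf",               /* special characters */
        "0' OR '1'='1",            /* injection-style input */
    };
    for (size_t i = 0; i < sizeof bad / sizeof bad[0]; i++)
        if (validate_username(bad[i]))
            return 0;              /* a bad input was accepted */
    return 1;
}
```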
What are other threats and countermeasures for mitigating application security risks?
Threat modeling and countermeasures are important steps in the secure development lifecycle, ideally undertaken when the application’s design is near completion. Threat modeling is an exercise in which developers identify the assets, or pieces of sensitive information, that the application houses and that need protecting. Countermeasures can then be built in to ensure the application does not leave private information vulnerable to potential attackers. Input filtering, one example of a countermeasure, is a technique programmers use to protect an application from attack by limiting the size and format of input to exactly what the application is expecting. For example, if an application is designed to accept a username that is all alphabetic characters with a maximum length of eight characters, the application should reject any input that is longer than eight characters. This helps protect the application from performing unintended operations on unexpected input. Developers should also closely examine the application’s consumption of bandwidth, CPU time, and disk space in order to mitigate denial-of-service risks.
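Returning to the username example, such an input filter amounts to a short whitelist check. The accept_username() function below is a hypothetical sketch of that exact policy: all alphabetic, at most eight characters, everything else rejected.

```c
#include <ctype.h>
#include <string.h>

/* Whitelist filter: accept only 1-8 alphabetic characters. */
int accept_username(const char *s) {
    size_t len = strlen(s);
    if (len == 0 || len > 8)
        return 0;                         /* reject empty or over-long input */
    for (size_t i = 0; i < len; i++)
        if (!isalpha((unsigned char)s[i]))
            return 0;                     /* reject special characters */
    return 1;                             /* exactly what we expect */
}
```

Whitelisting (describe what is allowed) is generally more robust than blacklisting (enumerate what is forbidden), because a blacklist fails open whenever an attacker finds a character the list omits.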
Developers should employ a thought process in which they imagine themselves as an attacker who knows everything about what the application can do. Then they should enumerate and categorize those threats to come up with ways to mitigate the risks. If there aren’t ways to mitigate the threats, the design should be changed and re-implemented.
What are common information leaks and how can hackers use information for malicious purposes?
Information leaks can also pose a threat to applications. A single information leak is often not a serious problem by itself, but it can give an attacker a path to more serious vulnerabilities. For example, if a user enters a long string of text rather than an 8-digit account number, a vulnerable application may respond with a detailed error message from the SQL server, often revealing to the attacker which version of SQL is in use and how the system is constructed.
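One mitigation is to separate what is logged from what is shown. In the hypothetical sketch below, user_facing_error() writes the full database error to a server-side log for the operations team, but returns only a generic message to the client, so nothing about the database version or query structure reaches an attacker.

```c
#include <stdio.h>

/* Log the detailed database error server-side; return only a generic,
 * fixed message for display to the end user. */
const char *user_facing_error(const char *db_error, FILE *server_log) {
    /* Full detail (SQL version, query text, etc.) stays in the log. */
    fprintf(server_log, "db error: %s\n", db_error);
    /* The attacker-visible response reveals nothing about internals. */
    return "Invalid account number.";
}
```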
What does Symantec recommend for “best practices” for application development?
All software developers should be educated on the fundamentals of secure application development. Developers should also take a more holistic approach to application development, building countermeasures into the design process, as well as rigorous QA testing. While there is not one “silver bullet” for building secure applications, developers can employ multiple processes that examine vulnerabilities in different ways to ensure application security before production.
Do organizations need third-party validation for security of applications?
Some organizations need to comply with regulatory requirements, particularly in the financial services industry. If regulatory compliance is an issue, organizations should consider enlisting a third party for a penetration test, which will provide validation of the application’s security.