Penetration testing has become an integral part of many organisations' security strategies, ensuring that their network defences against the outside world are regularly audited for effectiveness. However, penetration testing is a very broad field: practitioners take different approaches, and reporting styles vary widely in both content and format. This in turn affects the way in which the results are interpreted and addressed. A correct understanding of, and measured expectations for, the scope of a penetration testing project are also important.
For instance, organisations need to be aware of the differences between strict penetration testing (where the goal is to break in by any means possible) and a security audit, which is a wider assessment of risk based on the evidence obtained. Both approaches have their merits, but unfortunately the two terms are often used interchangeably, and this can result in disappointment. And, of course, this difference is reflected in the style of reporting.
For instance, it is common for security audit reports to have a greater level of detail. Another issue to consider is the importance of evidence and its interpretation. It may sound obvious, but there does need to be a clearly reasoned link between the results obtained and the recommendations given. In reality, this is not always the case. So, what constitutes “best practice” in a penetration test or security audit report? What should a company expect to find and what should they ask their penetration testing supplier to provide?
A report without a clear, logical structure is unlikely to have much effect. There really is no substitute for taking a textbook scientific approach to the overall structure of the report: it should have a summary up front and a clear separation between description of the context and the presentation of the findings. It is hard to avoid the term “management summary”, but this can be misleading, with its implication that all technical content should be removed – which is a bit tricky given a fairly technical subject in the first place.
It is better to view the summary as helping any reader decide whether they need to read the whole report – and as giving them the key scope and results if they decide that they do not have the time to go further. This means that the summary should be self-contained: in principle at least, the client should be able to tear it out of the report and treat it as a useful document in its own right. The section on context serves two purposes. Firstly, it gives reference information that may be needed to interpret the findings given later on: for example, dates and times of testing may explain why some transient vulnerabilities could not be found.
Secondly, it sets the reader's expectations of the outcome of the whole testing project, given that there are almost always some practical or budgetary limitations on the work carried out. This should leave the findings section clear of background information that can obscure the important information: the reader should be able to pick up the issues without having to strip away excessive “boiler-plate” material. It is always a good idea for those considering commissioning a penetration test to ask prospective suppliers for sample reports. These should be assessed not just for the technical gems that they might contain but on whether they have a clear structure. After all, a muddled report may represent muddled thinking.
The Importance of Context
As mentioned above, it is important for both practitioner and client that expectations are set correctly. No penetration test can encompass all possible aspects of security and the boundaries between target and supporting or third-party systems are often fuzzy. To minimise the risk of the report being misused, the context section should give the:
1. purpose: a clear statement of the aims of the exercise including any caveats, with a short description of the system under test and a prima facie assessment of the threats
2. scope: a reference to the original proposal for the work, any relevant security policy documents, a list of targets and testing dates, any agreed constraints plus restrictions due to force of circumstances
3. methods: a list of tests carried out, complete with an indication of their purpose – the areas that they address and the types of issue that they may uncover
4. external frameworks: if any generic methodologies or other external procedures have been employed then these should be referenced explicitly
5. tools used: although there is more to penetration testing than running tools, it is important to specify which tools were used, with versions and updates and, in some cases, details of modules and options selected
6. personnel: including the team leader, report author and other staff involved
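The six context items above amount to a checklist, and could even be tracked as one. A minimal sketch in Python – all field names are illustrative choices, not drawn from any standard report format:

```python
from dataclasses import dataclass, field

@dataclass
class ReportContext:
    """Context section of a test report; field names are illustrative."""
    purpose: str = ""                               # aims, caveats, threat assessment
    scope: str = ""                                 # proposal reference, targets, dates, constraints
    methods: list = field(default_factory=list)     # tests carried out and what they address
    frameworks: list = field(default_factory=list)  # external methodologies referenced
    tools: list = field(default_factory=list)       # (name, version) pairs, options selected
    personnel: list = field(default_factory=list)   # team leader, report author, other staff

    def missing_items(self):
        """Return the names of context items still left empty."""
        return [name for name, value in vars(self).items() if not value]
```

Running `missing_items()` before delivery gives a quick sanity check that no part of the context section has been forgotten.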
This can amount to many pages even for a short external penetration test, but if it is laid out correctly the reader can gain a general understanding of the background without reading every word.
Before getting into the detail of specific security vulnerabilities, a test report should note any general findings that may be of interest to the client. At the very least it should give an explicit list of the systems and services found: it may also contain details of functional anomalies and performance issues. It would be wrong for a penetration test to place too much emphasis on matters outside the security domain, but an external test may reveal aspects of the overall service that are not visible to anyone else.
Substantive security issues should be addressed one at a time. Unfortunately, this is not as easy as it seems, as teasing out the basic issues from the initial results is not a trivial job. Firstly, it is necessary to eliminate redundancy and avoid reporting the same underlying flaw more than once under different names. Then, to provide real value, consequential vulnerabilities have to be consolidated into one top-level issue. Finally, if the target consists of a number of individual systems then there may be a general pattern to be identified – hundreds of vulnerabilities detected across an office network may simply indicate that the standard desktop build needs attention. All in all, attaching a useful and relevant label to each security issue is not a simple task, but a good penetration tester should be able to rise to the challenge.
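The consolidation step described above can be thought of as a grouping exercise. A minimal sketch, assuming each raw tool finding has already been tagged with a hypothetical "cause" label (the dict keys are purely illustrative):

```python
from collections import defaultdict

def consolidate(raw_findings):
    """Collapse raw tool output into one top-level issue per underlying cause.

    Each raw finding is a dict with 'host' and 'cause' keys (illustrative);
    a cause shared by many hosts becomes a single pattern-level issue.
    """
    grouped = defaultdict(set)
    for finding in raw_findings:
        grouped[finding["cause"]].add(finding["host"])
    return {cause: sorted(hosts) for cause, hosts in grouped.items()}
```

A cause affecting every desktop on an office network then surfaces as one issue ("standard desktop build needs attention") rather than hundreds of near-identical repeats.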
Each real issue should be reported in a consistent manner:
1. issue: a clear description of the vulnerability using, if possible, a recognised name
2. severity: an initial assessment in terms of “must fix”, “should fix” or “could fix”
3. evidence: a statement of the tester's reasons for believing that the vulnerability exists, with an appropriate level of detail
4. references: pointers to relevant and up-to-date external information sources
5. actions: recommendations for resolving the issue in the current context
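Consistency of this kind is easiest to enforce when each finding is captured as a structured record. A minimal sketch, with the field names taken from the five headings above and the severity values validated on creation:

```python
from dataclasses import dataclass

# The three severity levels used in this style of report.
SEVERITIES = ("must fix", "should fix", "could fix")

@dataclass
class Finding:
    """One reported security issue (illustrative structure)."""
    issue: str        # recognised vulnerability name where possible
    severity: str     # one of SEVERITIES
    evidence: str     # the tester's reasoning behind the claim
    references: list  # external information sources
    actions: str      # recommended resolution in the current context

    def __post_init__(self):
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity!r}")
```

Rejecting unknown severity labels at the point of entry keeps every issue in the report on the same three-level scale.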
Perhaps the most difficult aspect of penetration testing or any other form of security audit is in determining the severity level. After all, there will be much that the members of the testing team don't know about the system under test, even during an extensive on-site testing assignment. In particular, they can't be expected to understand the importance to the client's business of the system and the data that it holds, especially if there is no formal security policy to underpin the whole exercise. However, by clearly identifying the vulnerability and possible fixes they can give the client as much help as possible in putting it into context.
More on Evidence
The expectations of clients who commission penetration testing and security auditing have changed in the ten years since the Internet first gained a significant business presence. In the early days, the subject was surrounded by mystery, and network managers were more prepared to trust the technical experts that they were paying to evaluate their systems. But times have changed. Most clients are much more security savvy – if only because they have their own home Internet connections to worry about – and some may have had bad experiences with less diligent testers who simply run automated tools and print out the reports that they generate. In particular, they want to see firm evidence for any claims made. This does not mean that the report itself should contain pages and pages of network traffic dumps, but that the tester can show clear reasoning based on results. This usually means that the vulnerability is shown to be repeatable, so that the effectiveness of any remedial work can be tested easily.
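One way to make a finding repeatable is to encode the check itself, so that the same test can be re-run after remedial work. A minimal sketch that classifies a service banner – the version threshold and banner format here are purely illustrative, not a real advisory cut-off:

```python
import re

def openssh_banner_outdated(banner: str, minimum=(8, 0)) -> bool:
    """Return True if an SSH banner reports an OpenSSH version below `minimum`.

    The threshold is an illustrative example, not a genuine vulnerability
    boundary; a real check would reference a specific advisory.
    """
    m = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if m is None:
        return False  # cannot tell from the banner; do not report a finding
    return (int(m.group(1)), int(m.group(2))) < minimum
```

Because the check is deterministic, running it against the live banner before and after patching gives the client a direct, repeatable verification of the fix.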
Experienced report writers will bear in mind that the deliverable may go – indeed, should go – beyond the manager who commissioned the work and will be subject to scrutiny by all manner of technical staff. In the real world, some of these readers may be sceptical about, if not hostile to, the exercise, and if the report does not stand on its own merits then it will achieve little. Although internal company politics is not the direct concern of the penetration tester, providing the client with an effective means of actually getting something done is in everyone's long-term interest.