MWR is a cyber-security consultancy that also engages in ongoing software development, and has therefore had to develop its capability both in software engineering and in making the products it builds highly secure.

This situation makes for interesting interactions between software engineers and security consultants, in which the experience of each sharpens the other.

It is from this combined base of knowledge that we have been able to understand how development practices can facilitate both an immediate, and long-term, improvement in the security posture of a product.

The product areas we have worked with make for a mix of solutions that focus both on the reinforcement of existing security mechanisms and on their potential abuses, all with the goal of facilitating and evaluating our clients’ security posture.

The typical security-testing model

A security assessment typically sits near the end of the software development process, once most of the architecture and implementation are complete.

The software can then be evaluated against attack vectors crafted by a specialist, the result of which may be the identification of a need for significant rework.

In our experience across multiple clients, the degree of security awareness in development varies from poor to competent. Unfortunately for developers, security awareness appears to be driven by a learn-on-the-job model, regardless of the tertiary education they have received.

The degree and quality of feedback from a security assessment can help improve security awareness in development. It is this overall quality of assessment feedback that MWR takes great effort to deliver and improve in its consulting work; for the purposes of continuous software development within MWR itself, however, feedback needs to arrive in a tighter loop.

An effective security assessment near the end of a product deliverable is necessary if an accurate picture of the security posture of a product or solution is to be determined, and it remains a useful point of measurement regardless of the development methodology. This may seem like a very waterfall-inspired approach; however, it is born of fundamentals rather than any particular development model, and may be seen in the same light as product sampling for overall quality control.

The product lifetime aspects of such sampling are, however, more involved; questions arise of when and how frequently to sample, the scope of the assessment, other related systems, and the amount of time and resources applied to the assessment.

In general, we have observed that for a given development process, the more infrequent the assessment sampling, the greater the risk of expensive change requirements later in the project’s lifecycle. Conversely, the scope and resource cost of each individual assessment can be reduced as the number of assessments increases.

Agile security controls

The questions of when to test, how much to test, and the scope of the testing have been an implicit topic for every software product developed at MWR.

We don’t assume that, just because we develop a product under a security brand, we have somehow achieved a good security posture by osmosis. These internal assessments take resources, and any negative results may impact our product schedules just as much as our consultancy’s assessments of clients’ products may expose weaknesses that they will need to address.

It is this nature of project risk that has made MWR software development seek to address the security aspects of development early in the project lifecycle, and to integrate security assessment into the regular business of day-to-day software development.

The need for continuous development and value delivery has further increased the need to sample the security posture of our products frequently. This experience is, for the most part, universal, and can be applied to many products and solutions in different situations.

Shifting security to be a first-class concern of development objectives from the outset has been one approach to this problem. It is easy to simply state a priority; implementing the factors necessary to achieve it requires buy-in from multiple organisational levels.

Project management functions must understand that security-related activities should be given the same precedence as product features. If this precedence is not maintained, the daily pressures on the development team will invariably push security into something done later, increasing change-related risk down the line.

The development team’s capacity to make good decisions needs to be refined, which requires training not only in good development practice, but also in the adversarial abuse of software and systems. This adversarial training better equips the team to understand and apply the attacker mindset when continuously reviewing, designing and commenting on software projects in order to harden them.

We’ll spare you the obvious Sun Tzu quote. To be successful, development team leadership needs a clear mandate: although security is not a primary deliverable, it must be given the necessary resources and attention, and sufficient organisational controls should be in place to allow it to be adequately addressed.

Code review as a control

Training is all well and good; however, it is insufficient by itself if the work processes needed to correct security-related problems within a tight time-frame are not in place.

Effective code review should be present as a matter of quality control for every development team; this is a well-established industry norm. Such a review process also provides for security review at multiple points in the code’s lifecycle, at the time when the context is most evident to both the reviewer and the developer.

In order to be effective, code review processes need to have a number of properties:

  • A sufficient level of familiarity and skill is required of the reviewers; typically, pairing a more skilled person with a less skilled one balances review effectiveness against the opportunity to build the skills of the team as a whole.
  • The review should actively interrogate, in a problem-solving manner, both the content of the code and the assumptions against which it has been developed.
  • The reviewer should evaluate the information and state being processed, while checking for the logical integrity of the mechanisms presented.

Over time, there is a tendency for review to become less careful; this must be avoided, and a culture fostered that promotes the value of review for both the team and the product.

Static analysis as a control

While review goes a long way, it is insufficient by itself if the combined complexity of the problem and the tooling used to solve it are not under control. It is empirically evident that some tools are more effective than others for a given purpose; otherwise the industry would be happy writing web-services directly in assembly.

Most projects are not implementing some trivial aspect; they are solving a problem not already solved by a commodity product, service, or library. The problem domain therefore comes with a degree of irreducible complexity: it is inherent in the problem being solved, and for the most part it can only be minimised so far, not eliminated.

The complexity involved in the particulars of the solution will vary with the technological landscape, and may grow with the number of relationships that the project has with other projects or systems. By allowing automated analysis of the logical components of a program, one can gain an increased understanding of the potential logical and security problems within the program or solution. Some have recently argued that such static analysis components are awkward in comparison with “just doing it”; these arguments, however, ignore the risk aspects of the alternative.

Complexity does not always manifest immediately; however, on average and over time, it will introduce increased potential of failure or security vulnerability. The many potential signs of such vulnerabilities may be caught by tooling that is able to reason about the code as well as the interfaces implemented by it.
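As a concrete illustration of the kind of data flow such tooling can reason about, consider a query built by string interpolation. A taint-tracking analyser will flag the unsafe variant below as a path from untrusted input to a query sink, while the parameterised form is reported as sanitised. This is an illustrative sketch using Python’s standard `sqlite3` module, not code from any particular product:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # Untrusted input flows directly into the SQL text; taint-tracking
    # static analysers flag exactly this source-to-sink path.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # A parameterised query keeps the name as data rather than syntax,
    # so the same analysis reports the flow as safe.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With an input such as `' OR '1'='1`, the unsafe variant returns every row in the table, while the parameterised variant returns none.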

Automated adversarial testing

Static analysis can assist in identifying flows that may lead to security issues due to unexpected data, error cases that are not handled well, library functions that are deprecated, as well as other modes of sub-optimality. This analysis does not detect logical flaws and models that are prone to being abused, nor does it detect changes of a logical nature that result in security flaws being exposed or generated over the life-cycle of the product.

Automated testing can be leveraged to improve the security posture of a product in a number of ways:

  • Units of functionality at multiple levels should be exercised with adversarial input in test cases: if functional units cannot handle such input, or cannot maintain their state under such conditions, they are prone to introduce security issues in the wider application or system.
  • It is not reasonable to keep re-checking input below a certain level of implementation detail; however, such limitations should be understood and documented within the software definition so that they are readily visible when later changes are made.
  • Fuzz-testing can be highly effective in sampling the potential areas of abuse via unexpected input. Such testing may be complemented with techniques such as Latin hypercube sampling to gain an understanding of behaviour against inputs with large potential variance.
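As a minimal sketch of the first and last points, the loop below throws randomly generated strings at an input parser and asserts its invariants hold. The `parse_port` function and its range check are hypothetical examples of ours, not drawn from any particular codebase; real fuzzing would typically use a coverage-guided tool rather than plain random sampling:

```python
import random
import string

def parse_port(value: str) -> int:
    """Parse a TCP port, rejecting anything outside 1-65535."""
    if not value.isdigit():
        raise ValueError("not a number")
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError("out of range")
    return port

def fuzz(iterations: int = 10_000, seed: int = 0) -> None:
    # Throw short random printable strings at the parser: either it
    # returns a value satisfying its invariant, or it rejects the input
    # cleanly with ValueError. Any other exception is a surfaced bug.
    rng = random.Random(seed)
    for _ in range(iterations):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 12))
        )
        try:
            port = parse_port(candidate)
            assert 1 <= port <= 65535  # accepted values stay in range
        except ValueError:
            pass  # clean rejection is an acceptable outcome
```

Run as part of the regular test suite with a fixed seed, this gives a cheap, repeatable sample of the input space on every build.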

Logging for responsiveness

If something does go wrong, it is important that the product’s logs are sufficient for the purposes of breach detection. Unfortunately, there are many incident responses in which the logging is insufficient to make the responders’ work either quick or effective.

In order to be useful for blue-team purposes, logs need to be aggregated in a centralised manner, off-site from the systems generating them. The regular log level should be sufficient to track likely details of compromise, while maximising the retention of logging information before rollover. Logging should be structured and enriched with contextual information in order to facilitate categorisation and searchability, upon which anomaly detection can later be built.

Anomaly detection can be self-learning to a certain degree, and can provide a vital improvement in time-to-detection, catching a breach before the attacker attempts to move laterally. Effective logging requires a level of definition and discipline as to what log levels mean, and a degree of tuning of the location and frequency of log statements. Effective logging practices also greatly improve the ability to debug issues in all phases of development, especially in production.

Just as in life, there are few opportunities for do-overs, especially when it comes to the loss of data necessary for the quick remediation of a pernicious bug or breach.

Understanding your dependencies

There are few pieces of software that don’t have key dependencies sourced from third-party repositories; this makes dependencies a source of value for the development process, and also a source of complexity that needs to be managed in the review process.

Evaluating dependencies in depth can be arduous; however, it is quite feasible to cache the current dependencies in staging repositories for build processes, and make conscious decisions about when and what systems are updated. As much as it may seem convenient, automatic updates to the latest version of a code dependency can be more risky in security terms.
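One cheap, concrete control in this direction is a build-time gate that refuses any dependency not pinned to an exact version, so that version bumps are conscious, reviewed decisions rather than side effects of a build. The sketch below checks a pip-style requirements file; the exact file format and policy are our assumptions, not a universal rule:

```python
def unpinned_requirements(requirements_text: str) -> list[str]:
    """Return requirement lines not pinned to an exact (==) version.

    A CI step can fail the build on a non-empty result, forcing
    dependency updates through the normal review process.
    """
    offenders = []
    for raw in requirements_text.splitlines():
        # Drop inline comments and surrounding whitespace.
        line = raw.split("#", 1)[0].strip()
        if not line:
            continue
        if "==" not in line:
            offenders.append(line)
    return offenders
```

The same idea extends to hash-pinning and to lockfiles in other ecosystems; the point is that the set of dependencies entering the build is explicit and auditable.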

As an example, it is vital to isolate the namespace of internal packages from those sourced from public repositories, in order to avoid dependency-confusion attacks. MWR has delivered some telling demonstrations of such attacks, whereby a simple typo in a package name may lead to the compromise of key developers, build systems, and the final software product. It is never good for a company’s sales when it ends up shipping malware embedded within its products as a result of a compromised upstream dependency.

Policy and knowledge lifecycles

Depending on the nature of the development team, developing and maintaining an effective team capability that reduces security risk during development may be challenging.

The effects of development team changes can be mitigated, to some extent, by codifying learnings in a knowledge base that lives on beyond those changes. Such documentation should be kept concise, and be clear about the purpose of the norms and standards it espouses. Done well, the learnings in the knowledge base can be referenced during review to communicate security aspects effectively, without wasting effort on duplicated comments.

Management should also consider that development teams come from varied backgrounds, bringing their own habits and patterns, some of which are less effective for security engineering purposes. Instilling an approach of defence-in-depth within the layers of the software and its associated systems is as much a people problem as a technical one, and, often, the most difficult people to convince of the value of such endeavours are not those implementing the solution at a technical level.

Risk, as a concept, is difficult to understand, account for, and put a monetary value on; and security engineering, by nature, is difficult to quantify outside a risk-based framework.

Wrapping it up

We have covered some adaptations of industry-accepted software engineering practices that lead to generally better security outcomes; however, it must not be assumed that implementing them is a substitute for final security assurance. Rather, these and other approaches reduce the risk of security assessments recommending significant rework, the potential for security problems arising in future development, the likely severity of issues that do arise, and the effort needed to remediate security problems at multiple levels.

Security engineering is ultimately sustained by the effective technical management and self-management of software engineering teams. At the very least, we highly recommend fostering an environment of continuous improvement, of both the software being developed and the people and practices involved in developing it.

Happy engineering; may your bugs be few, and your vulnerabilities fewer.