Security through obscurity has been a controversial topic for as long as cybersecurity has existed. It is the practice of obscuring code or information so that it is more difficult for an attacker to investigate (enumerate). Common examples include not broadcasting the SSID of a Wi-Fi network, or moving a service to a less common port number in an attempt to evade scanners. The practice remains controversial because it does not fix the root cause of vulnerabilities in a system (for an explanation of root causes, see https://www.mwrcybersec.com/the-root-of-all-e-vulns); rather, it simply makes them more difficult to find and exploit. This blog post explores the current discussion around the benefits and detriments of security through obscurity by examining the effect it can have on systems, zero-day vulnerabilities, usability, and development effort.
A Historical Look at Security through Obscurity
The value of security through obscurity is by no means a new debate. One of the earliest documented discussions of it (and of the ethics of vulnerability disclosure) that we could find dates back to 1851 and the locksmith Alfred Charles Hobbs. Hobbs demonstrated how supposedly state-of-the-art locks of the time could be easily bypassed by criminals. This led to public concern that exposing the inner mechanisms of these locks would cause an increase in lockpicking incidents; some felt it would be better to keep the designs secret. Had Hobbs maintained that secrecy, he would have been practising security through obscurity: obscuring the design of the locks without remediating any of the vulnerabilities actually present. It would not have prevented the locks from being picked, and it would have lessened awareness of their insecurity. Hobbs responded to the criticism with:
“Rogues are very keen in their profession, and know already much more than we can teach them.” 
A Modern Analysis
More than 170 years later, security through obscurity remains a live topic in cybersecurity. There are many arguments both for and against its use and its different aspects, but most agree that it should not be a primary security measure. Obfuscating a system without addressing the root cause of its vulnerabilities fixes nothing; it only makes those vulnerabilities potentially harder to find and exploit. In essence, the system retains its original flaws, with a layer of obscurity on top. As with Hobbs's locks, keeping a design secret would not have made it more secure; it would more likely have added some difficulty to an attacker's search for the correct method to pick the locks. If the security of a system relies heavily on obscurity impeding an attacker's enumeration, the attacker only needs to discover a means of stripping away that obscurity in order to exploit the system. It's the cybersecurity equivalent of hiding a secret room behind a bookshelf: while this may thwart basic attempts at discovery, a more thorough attacker will not be deterred. That said, there are genuine benefits to obscuring systems as a defence-in-depth measure, with camouflage of your digital estate forming one layer of a comprehensive defensive strategy.
When a sophisticated threat actor conducts an attack, they generally follow the cyber kill chain, which begins with reconnaissance: enumerating and designating targets. This is arguably the most important step in any successful cyber attack. Obscurity can significantly increase the time and resources a threat actor must spend enumerating a system, but it does not fix any existing gaps in the system's security model. Attackers often take the path of least resistance, and a sufficiently obfuscated system may lead a threat actor to abandon their pursuit in favour of easier quarry. Enumeration and target selection is a time-consuming process, and if other, less obscure systems are present, an attacker may choose to focus their efforts there instead. In this sense, security through obscurity does act as a basic deterrent, and defending a digital estate is certainly easier with less noise from opportunistic threats. There is no guarantee of this, however, and a determined attacker should always be assumed to have effectively unlimited time and resources. As mentioned before, obscurity should be added as a defence-in-depth measure, never in isolation.
A further potential benefit of security through obscurity is increasing a system's resilience against zero-day exploitation. A zero-day vulnerability is one previously unknown to the system's developers, or to anyone capable of mitigating it, and is therefore unpatched. When a catastrophic zero-day vulnerability is disclosed, history has shown that automated attacks will attempt to leverage it against anyone who may be vulnerable, normally by mass-scanning the internet for vulnerable ports or services. Such automated attacks are unlikely to perform full enumeration of each target, given the time this requires and the potentially large ranges being scanned. Moving a service to a non-default port could therefore, in theory, prevent hasty automated scans from discovering it. In this way, obscurity can serve as a useful additional layer against otherwise indefensible zero-day threats, offering tangible, practically realisable benefits when combined with other layers of security.
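To illustrate, a hasty mass scan of the kind described above typically probes only a short list of default ports, so a service listening elsewhere is simply never checked. Below is a minimal sketch in Python; the host and port numbers are illustrative assumptions, not taken from any real scanner:

```python
import socket

# Default ports a hasty mass scanner might probe (illustrative subset).
DEFAULT_PORTS = [21, 22, 23, 80, 443, 3389]

def quick_scan(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# A scan limited to default ports never touches a service listening on,
# say, port 50022 -- the obscurity defeats the hasty scan, not a full one.
print(quick_scan("127.0.0.1", DEFAULT_PORTS))
```

A full scan of all 65,535 ports would still find the relocated service, which is exactly why this is a delaying tactic rather than a fix.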
However, this can come at the cost of usability. Altering aspects of a system, such as port numbers, may harm the user experience and add development overhead. Moving a service such as SSH to an odd port may confuse users, and integrating obscured systems with existing technologies may require developing a compatibility layer between them. Taken too far, this works against good security practice, which is fundamentally about balancing user impact against the practical security of systems.
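Some of that friction can be absorbed by administrators rather than users. Assuming OpenSSH is in use, for example, a client-side host alias can hide the non-default port so users never have to remember it (the alias, hostname, and port below are hypothetical):

```
# ~/.ssh/config (client side) -- hypothetical alias, hostname, and port
Host build-server
    HostName build-server.internal.example.com
    Port 50022
```

Users then connect with `ssh build-server` exactly as they would to a default-port host; the obscurity becomes transparent to them, though the configuration itself still has to be distributed and maintained.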
A counterargument to the use of security through obscurity comes from Kerckhoffs' principle, a rule of cryptography first stated by Auguste Kerckhoffs in the 19th century. At the time, information was commonly hidden using techniques such as steganography, the concealing of information within another message or object to avoid detection. Kerckhoffs disagreed with this approach; his philosophy was more aligned with that of open security. Open security, which can be thought of as a modern extension of Kerckhoffs' principle, argues that applying open-source philosophy to developing secure systems has a greater benefit than security through obscurity. One way of phrasing Kerckhoffs' principle is:
“One ought to design systems under the assumption that the enemy will immediately gain full familiarity with them”
In essence, this principle states that a system should remain secure even if its entire internal workings are open to analysis.
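Modern cryptography embodies this principle: algorithms such as HMAC-SHA-256 are completely public, and all of the security rests in the secrecy of the key. A short Python illustration (the keys shown are obviously placeholders, never something to hard-code in practice):

```python
import hmac
import hashlib

# The algorithm (HMAC-SHA-256) is public knowledge; only the key is secret.
key = b"placeholder-secret-key"      # in practice: randomly generated, kept secret
message = b"open design, secret key"

# Anyone can compute tags -- but only the key holder gets the right one.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# An attacker with full knowledge of the algorithm but the wrong key
# cannot forge a valid tag for the message.
forged = hmac.new(b"wrong-key", message, hashlib.sha256).hexdigest()
print(tag == forged)  # False -- the tags differ
```

Knowing every detail of HMAC gains the attacker nothing; this is the opposite of a design whose safety depends on its mechanism staying hidden, as with Hobbs's locks.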
From a penetration testing perspective this is particularly relevant. Assessments are often time-boxed or limited in scope, and while obscuring a system can make it notably harder to target in the wild, it can reduce the efficiency and value of a penetration test if it prevents the underlying system from being comprehensively assessed. Penetration testing aims to discover exploitable security vulnerabilities and the attack vectors that malicious threat actors could abuse against an organisation. Its results give the organisation a snapshot of its security posture, the likely attack vectors, and the potential impact of a breach based on the discovered vulnerabilities. Unlike real-world attackers, penetration testers do not have unlimited time to dedicate to targeting a system. Time spent needlessly navigating a layer of obscurity is therefore time that could be better spent ensuring the underlying system is secure, with or without that added layer.
So… what’s the verdict?
The essence of the arguments against security through obscurity is that some consider it more trouble than it is worth. After all, if your system is secure anyway, why go to substantial effort to add a vague extra layer that may negatively impact your own organisation while contributing nothing to the underlying security? The reality is that maintaining security is already a difficult, constant effort, even without added complexity; threat actors and cybersecurity specialists are in a constant tug of war to gain the upper hand. Some organisations may feel that a degree of obscurity gives them an edge over attackers, but it should never come at a significant cost to usability, nor be used as a replacement for remediating underlying vulnerabilities. If a layer of obscurity makes it more difficult for an attacker to enumerate a system, why not add it? Slowing down the early stages of the cyber kill chain, such as reconnaissance and weaponisation, can grant IT professionals valuable time in the event of a zero-day vulnerability being exploited, or impede the progress of threat actors who have already compromised a system. However, nothing is free: that layer of obscurity will almost always cost someone at your organisation effort that could be spent patching real security vulnerabilities. The decision of whether to obfuscate systems should be context-dependent, and it should never displace development time that could instead be spent practically hardening systems. Obscurity should be a defence-in-depth measure, implemented with low priority to provide additional mitigation against threats once real remediations are in place.
 – https://danielmiessler.com/p/security-by-obscurity/
 – https://csrc.nist.gov/pubs/sp/800/160/v2/r1/final
 – https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html
 – https://link.springer.com/article/10.1007/s10458-021-09522-w
 – https://archive.org/details/securityengineer00ande/mode/2up
 – https://archive.org/details/bstj28-4-656/page/n5/mode/2up?view=theater
 – A.C. Hobbs (Charles Tomlinson, ed.), Locks and Safes: The Construction of Locks. Virtue & Co., London, 1853 (revised 1868)