How the UK’s Code of Practice on IoT security would have prevented Mirai
The UK’s report on Secure by Design was released today after a significant amount of work from some of the best minds in government, academia and industry. This is one of the first major steps taken by any government towards eliminating some of the bad practices that have plagued connected devices and services for many years.
Copper Horse’s CEO, David Rogers, was the author of the UK’s Code of Practice for Security in Consumer IoT Products and Services, part of its report on Secure by Design, written in collaboration with DCMS, the NCSC, industry and academia. Here, David discusses how one of the major attacks on IoT, the Mirai botnet, would have been prevented by the Code and how its successors would have been neutralised.
The security of devices and services is never about one single measure. By building in strength in depth, vendors make it extremely difficult for an attacker to execute a successful, persistent attack that can affect millions of IoT devices.
Taking the infamous IoT botnet Mirai as an example, the Code of Practice provides multiple layers of protection against this attack, including the following:
1. Elimination of default passwords (guideline number 1) – Mirai used a list of 61 known default username and password combinations, covering millions of devices. Had these passwords been unique to each device, Mirai could not have worked. A minimal sketch of how a device might reject such defaults appears after this list.
2. Software updates (guideline number 3) – Many of the devices infected by Mirai were either behind on their patching or simply couldn’t be patched at all, which meant the spread of Mirai could not easily be halted. Had software updating been in place, devices could have been both immunised and fixed. Most importantly, regular patching also protects against future attack variants that exploit other vulnerabilities, neutralising their effect.
3. Minimising exposed attack surfaces (guideline number 6) – By following this guideline, vendors would have prevented Mirai because the Telnet port it used to attack the devices would have been closed and therefore inaccessible. This is a good demonstration of the principle of “secure by design”.
4. Ensuring software integrity (guideline number 7) – This would have prevented arbitrary remote code execution and would help prevent issues such as authentication bypasses. Even if Mirai could have reached a device, with no way to run unauthorised code it could not have done anything. A sketch of one such integrity check also follows this list.
5. Resilience to outages (guideline number 9) – Designing a system to be resilient means that if it falls victim to an attack like Mirai, key services will continue to operate, severely limiting the effect of the attack until it is dealt with.
6. Vulnerability disclosure policy (guideline number 2) – Having such a policy allows these types of issues to be reported to vendors by security researchers and subsequently addressed, before they can be maliciously exploited. We want to ensure that vendors get information about vulnerabilities from the good guys first.
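To make guideline 1 concrete, here is a minimal sketch, in Python, of the kind of first-boot check a device could run: it refuses any credential that appears on a blocklist of known factory defaults (the sort of list Mirai’s scanner carried) and insists on a minimum password length. The credential pairs, length threshold and function names are illustrative assumptions, not part of the Code of Practice.

```python
# Hypothetical first-boot provisioning check: reject known default
# credentials and require a reasonably strong, per-device password.

# A tiny excerpt of the sort of default pairs Mirai tried; the real
# scanner list held 61 combinations.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("admin", "password"),
}

MIN_LENGTH = 12  # illustrative policy threshold


def credential_is_acceptable(username: str, password: str) -> bool:
    """Return True only if the pair is not a known default and is long enough."""
    if (username, password) in KNOWN_DEFAULTS:
        return False
    if len(password) < MIN_LENGTH:
        return False
    return True


if __name__ == "__main__":
    # The device would refuse to finish setup until this check passes.
    print(credential_is_acceptable("admin", "admin"))            # False
    print(credential_is_acceptable("admin", "7f3k-qm2x-v98d"))   # True
```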
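And for guideline 7, a self-contained sketch of one way a device could refuse to install tampered firmware: the update client verifies an Ed25519 signature against a vendor public key fixed in the device at manufacture. The key generation appears here only so the example runs end to end; the names and the use of the Python `cryptography` library are assumptions, not the method described in the Code.

```python
# Sketch of a software integrity check: only install firmware whose
# signature verifies against a trusted vendor key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- vendor side (build and signing infrastructure) ---
vendor_key = Ed25519PrivateKey.generate()
firmware_image = b"pretend this is the new firmware image"
signature = vendor_key.sign(firmware_image)

# --- device side (update client with the vendor's public key baked in) ---
device_trusted_key = vendor_key.public_key()


def install_if_genuine(image: bytes, sig: bytes) -> bool:
    """Install the image only if the signature verifies; otherwise refuse."""
    try:
        device_trusted_key.verify(sig, image)
    except InvalidSignature:
        return False  # tampered or unsigned image: never installed or executed
    # flash_to_storage(image) would happen here on a real device
    return True


print(install_if_genuine(firmware_image, signature))                 # True
print(install_if_genuine(firmware_image + b"backdoor", signature))   # False
```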
You can see that these design measures, if implemented, create the foundations that reduce exposure to such attacks, provide pre-emptive protection for products once an attack is out in the wild and allow an ongoing attack to be responded to, all whilst keeping users secure.
Security is a very difficult subject and there is no panacea for device security, given that you are almost always dealing with an active adversary (sometimes clever automation in the form of AI and machine learning). This is why, like many, I believe that security is more art than science.
In approaching this piece of work, we never set out to achieve a remedy for all ills because that simply isn’t possible. What we did do was take a long, hard look at what the real problems are and what solutions need to be in place. Industry has already come a long way; a lot of vendors and service providers are doing a huge amount to make things more secure. Just look at the GSMA’s IoT Security Guidelines, which are now being adopted across the world, or the work of the IoT Security Foundation, among others.
There are still a lot of vendors and startups who need a guiding hand, or who wilfully ignore security for various reasons. This includes the mobile applications that control IoT devices, which are often over-permissioned or which don’t encrypt their internet communications correctly.

We looked at measurable outcomes. How would a retailer be able to check whether something was insecure? What things are easily testable by a consumer group (a simple check of this kind is sketched below)? If someone tries to put something insecure into a major retail outlet, could it be caught before it was sold? In the future, would an organisation like Trading Standards be able to identify insecure devices easily? My own view is that we should be able to flush the bad stuff out of the system whilst encouraging innovation and enabling businesses to make IoT that is secure, privacy-respecting and convenient for users.
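As a rough illustration of a measurable outcome, the sketch below shows the kind of bench test a retailer or consumer group could run: it simply checks whether a device on a test network still answers on the Telnet ports that Mirai scanned for. The device address is a placeholder, and a clean result is of course no proof of overall security, only one easily testable signal.

```python
# Bench-test sketch: flag a device that leaves Telnet reachable at all.
import socket

DEVICE_ADDRESS = "192.168.0.50"   # hypothetical device under test
TELNET_PORTS = (23, 2323)         # ports the Mirai scanner targeted


def open_telnet_ports(host: str, timeout: float = 2.0) -> list[int]:
    """Return which of the Telnet ports accept a TCP connection."""
    reachable = []
    for port in TELNET_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # closed, filtered, or host unreachable
    return reachable


if __name__ == "__main__":
    exposed = open_telnet_ports(DEVICE_ADDRESS)
    if exposed:
        print(f"FAIL: Telnet reachable on ports {exposed}")
    else:
        print("PASS: no Telnet ports reachable")
```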
Additional thoughts are on David’s blog: A Code of Practice for Security in Consumer IoT Products and Services