And yet this world is poorly controlled, poorly understood and designed to abstract complexity away from users to such a degree that they are increasingly unable or unwilling to participate in active choices about the security of their systems, applications and data. As a result the IoT has been the breeding ground for an entirely new scale of DDoS attack (the 1+ Tbps Mirai attack on OVH in 2016), and cyber-attacks on previously air-gapped SCADA-based industrial systems doubled from 2013 to 2014, with 245 industrial incidents reported in 2014 in the USA alone.
The IoT is the central nervous system that enables the delivery of cyber-physical systems. And by their nature CPS applications have direct and measurable impact on our experience of the lived-in world. They do not stop at the screen – they cross the blood-brain barrier into our lived, physical existence. When industrial machinery is compromised by trojan horse applications that render tolerance parameter measurements unreliable, then millions of Euros of inventory have to be re-certified or scrapped. When autonomous car driving systems fail to accurately sense hazards in their field of operations, whether through error or infiltration of malicious code, then people are at risk of injury or even death. When home security systems and home heating systems are compromised to reveal data about home occupancy, then individuals are at higher risk of crime.
International responses have had limited effect, but are tightening
As an emerging technology space, the international response to securing and regulating the IoT has been relatively light-touch. In 2016 the UK’s Cyber-Security Regulation and Incentives review from the National Cyber-Security Centre defined a set of “secure by default” guidelines for the IoT, but their implementation remains at the level of steering guidance rather than regulatory enforcement. Similarly, in the US in 2016, strategic principles for securing the IoT were established, but issued as non-binding guidelines. The good intent from international authorities has been to allow industry self-regulation to develop, led by market dynamics. However, a combination of pre-existing regulatory environments, application sectors that are themselves developing or already operate specialized regulations (connected cars, smart healthcare), the global nature of the IoT supply chain and the commercial dynamics of the hardware markets IoT services rely on has resulted in a highly complex, overlapping and confusing proliferation of de-facto standards developed by commercial consortia, standards bodies and authority agencies. Consolidation across these initiatives has been painfully slow, with the result that adoption is piecemeal and smaller manufacturers complain that the cost and time taken to work out which standards to work to are prohibitive.
In response, international authorities are tightening their guidelines, and reinforcing their approaches by requiring compliance as a part of public procurement qualification. In 2017 the US Congress introduced the “IoT Cybersecurity Improvement Act”, which would impose minimum cybersecurity operational standards on internet-connected devices procured by Federal Agencies. In September 2017 the EU proposed that ENISA become the formal cybersecurity agency of the European Union. The two regulatory authorities are converging towards a set of minimum security standards at both device and system level, harmonized between the EU and the US. However, implementation is still by non-binding recommendation and public procurement, which does not resolve standardization issues for consumer device purchases.
The IoT domain is extremely challenging terrain for the emergence of formalized standards. Any approach needs to address intersectionality across:
- Topology – device, edge, connectivity, services
- Application area – consumer goods, critical infrastructure, industrial and commercial services
- Specific application area regulations/standards – smart utilities, connected transport, smart health
The market is harmonizing towards common categories for standardization, but the multiplicity of “standards” and kitemarks/certification processes makes it difficult for manufacturers to choose their appropriate route, and for purchasers to understand what assurance is being offered.
Users’ cyber-security awareness is dulled by abstraction
The IoT cyber-security context is complicated even further when you consider what human interaction with an IoT enabled world actually means. As recipients of services in an IoT enabled world our interaction is often more implicit than explicit. The IoT exists to abstract us away from the complexities of interactions between ourselves as inhabitants of a “smart” world and the data-flows and events that enable the seamless integration of applications that we crave. How can users who experience events without specific agency be enabled to make secure choices about their data and contextual information in this pervasive environment? And do we have any evidence that given the choice, they would actually do so?
Cyber-physical systems include computational, physical and human elements, and all are potential points of weakness for malicious attacks.
Does better security come from better systems that ultimately seek to eliminate the agency (and unpredictability) of the human element?
The truth is that attempting to eliminate user error often simply transfers the risk to another human component of the system – software developer, decision scientist, engineer, trainer – sometimes with catastrophic results. Indeed, systems that too effectively “hide” complexity from users may exacerbate failure through unwarranted confidence in the “machine”.
In their 2017 research on “Smart Cyber-Physical Systems” Craggs and Rashid call for the development of “security ergonomics” for the IoT. This is a practice which understands
“the interactions among humans and other elements of a system…. in order to optimize human well-being and overall system performance”
Their research leans heavily into the “human in the system” safety dynamic used across the aerospace industry – and this is a clear example of where the cyber-engineering profession can learn from the physical-engineering domain that has preceded it. There are three predominant models for accommodating human agency in safety design in aviation.
- The SHEL model (also SCHEL), Hawkins – 1970s
Software, (Culture), Hardware, Environment, Liveware (humans)
- The Swiss Cheese model, J. Reason – 2008
Describes how critical and uncontrolled failures occur when both an active failure (an unsafe human action) and a latent failure (an underlying system failure) coincide. For example – in the case of the Mirai botnet attack in 2016 the active failure was that users did not change their device default passwords, the latent failure was that systems did not allow for security patches or firmware upgrades.
- HFACS (Human Factors Analysis and Classification System), Shappell and Wiegmann – 2000
Organisational influences, Unsafe supervision, Preconditions for unsafe acts, Unsafe acts
When considering security ergonomics for the IoT these models of the human factor in safety violations could prove invaluable for application designers and software engineers.
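Reason’s coincidence condition is simple enough to express directly. The following is a minimal illustrative sketch, not taken from the source research – the incident fields are hypothetical, chosen to echo the Mirai example above – in which a failure is treated as critical only when an active failure and a latent failure line up.

```python
# Illustrative sketch: Reason's Swiss Cheese model as an incident-triage rule.
# A critical, uncontrolled failure occurs only when an active failure (an
# unsafe human action) coincides with a latent failure (an underlying system
# weakness). Fields are hypothetical, echoing the 2016 Mirai example.
from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    active_failure: bool   # e.g. user never changed the default password
    latent_failure: bool   # e.g. device cannot receive security patches

def is_critical(incident: Incident) -> bool:
    """Swiss Cheese: the holes line up only when both layers fail."""
    return incident.active_failure and incident.latent_failure

mirai = Incident("Mirai botnet recruitment", active_failure=True, latent_failure=True)
hardened = Incident("Default password, but patchable firmware", True, False)

print(is_critical(mirai))     # True: both layers failed together
print(is_critical(hardened))  # False: the latent layer held
```

The value of encoding the model this way is that it forces incident reports to record both layers separately, rather than stopping the analysis at “user error”.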
IoT cyber-attacks are unevenly distributed and can be modelled
In the industrial IoT, asynchronous security provision across supply chain members presents an uneven threat surface. Ninety-two percent of cyber incidents stem from the smallest and thus least well protected members of a supply chain – not the OEM, but the tier 3 and 4 suppliers.
- In 2014, Dragonfly targeted the industrial control systems (ICS) of energy companies, by compromising the websites of ICS software providers and uploading malicious code that gave Dragonfly remote access to industrial systems.
- The Shylock banking trojan attacked legitimate websites through website-builder code used by digital agencies, redirecting users to a malicious Shylock site where malware was downloaded to their systems in the background.
- Botnet attacks in 2013 targeted not the prime players in a supply chain, but the third-party data stores to find critical mass of data assets in a supply chain and stage targeted infiltration attacks to maximise impact – exactly the approach used in the Chtrbox Instagram hack which exposed 49 million records of “influencer” individuals.
Part of the problem is that the most secure profile for data storage – distributed, fragmented and asynchronously controlled – renders the data difficult to analyse or work with in real-time. To work with data distributed across device and cloud requires computational intelligence in data retrieval and feature extraction that can deliver efficiencies in both processing and memory to allow for real-time event generating analysis.
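One way to meet that efficiency requirement is to compute features incrementally as data streams off the device, rather than buffering it for later batch analysis. A minimal sketch, assuming Welford’s online algorithm is an acceptable stand-in for the “computational intelligence” the text calls for (the sensor readings are invented for illustration):

```python
# Minimal sketch: constant-memory streaming feature extraction using
# Welford's online algorithm. Each update is O(1) in time and memory,
# so a resource-constrained edge device can summarise a data stream
# without storing it for batch analysis.
class StreamingStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = StreamingStats()
for reading in [20.1, 20.3, 19.9, 20.0, 35.7]:  # hypothetical sensor values
    stats.update(reading)

print(round(stats.mean, 2))  # 23.2
```

Summaries like these can then feed real-time event generation – for example, raising an alert when a new reading sits several standard deviations from the running mean – without ever retrieving the distributed raw data.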
And this requirement for significantly increased processing power and memory capacity comes alongside an increasing volume of cyber threats. Attacks specifically targeting SCADA industrial control systems rose 100% in 2014 vs 2013 in Finland, the UK and the USA. In 2014 the US Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) responded to 245 incidents, with Energy and Critical Manufacturing companies at the top of the list.
Researchers at Cranfield University in the UK have called for an artificial intelligence immune system for the IoT (Li et al., 2004), believing that data-driven cyber-security systems have specific value to add in:
- Multi-modal cyber-authentication
- Detecting malicious footprints
- Encryption
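Of these, detecting malicious footprints lends itself to a simple illustration. The sketch below is hypothetical – device names and thresholds are invented, and a real “immune system” would layer far richer models on top – but it shows the basic shape of a data-driven screen: flag any device whose traffic sits far above the fleet baseline.

```python
# Illustrative sketch (names and thresholds are hypothetical): flag devices
# whose outbound traffic deviates sharply from the fleet baseline - a crude
# statistical screen of the kind a data-driven "immune system" could refine.
from statistics import mean, stdev

def flag_anomalies(kb_per_hour: dict[str, float], z_threshold: float = 3.0) -> list[str]:
    values = list(kb_per_hour.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation in the fleet, nothing to flag
    return [dev for dev, v in kb_per_hour.items() if (v - mu) / sigma > z_threshold]

traffic = {
    "camera-01": 120.0, "camera-02": 115.0, "camera-03": 130.0,
    "camera-04": 125.0, "camera-05": 118.0,
    "camera-06": 9500.0,  # compromised device flooding a DDoS target
}
print(flag_anomalies(traffic, z_threshold=2.0))  # ['camera-06']
```

Even a screen this crude matters in the IoT context because, as discussed below, compromised devices often show no local symptoms at all – the footprint is only visible in aggregate network behaviour.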
Data-driven, predictive methods require international collaboration
At the ICPP3 event in 2017, a team of researchers presented the paper “The role of transnational expert associations in governing the cybersecurity of the IoT”. They asserted that an effective international cybersecurity governance model cannot rely on traditional points of authority (standards and regulations) and instead needs active information gathering and monitoring – for example through the Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG) and the Anti-Phishing Working Group (APWG). But to monitor breaches effectively it’s worth considering why IoT products have such low levels of security, and therefore why traditional governance models won’t work.
IoT products and their supply chains share structural weaknesses:
- Products are low margin, with vast global supply chains where connectivity is a sales and profit driver in a context of extreme price competition
- Operators within these supply chains have very limited experience of valuing or incorporating security protocols in their products
- OEMs find it extremely hard to push security standards into their supply chains
As a result, massive-scale attacks in the IoT have a number of specific characteristics:
- They have more opportunity to harvest permanently switched-on, connected things
- Infections are more durable because devices have limited security features and vulnerability management
- Greater contamination rate due to the increasingly networked environment
There is some irony in the reflection that the success of the Mirai botnet – which launched the first Tbps-scale DDoS attack on OVH using devices from 164 countries – may itself reduce the scale of future IoT botnet attacks, because malicious actors now compete to recruit the same pool of IoT devices. More attacks, but with fewer agents able to be co-opted at any one time.
IoT devices are a rich harvest for botnet attacks because they:
- Are always on
- Have no antivirus support and/or limited update mechanisms
- Lack fail-safe mechanisms that would allow them to drop their internet connection but still deliver baseline performance
- Are computationally underpowered but have generous raw network connectivity
- May not demonstrate any performance drop when compromised
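Read as a checklist, these characteristics suggest a crude way to triage a device estate. The sketch below scores a device by the fraction of the listed factors it exhibits; the field names and equal weighting are illustrative assumptions, not a published scoring scheme.

```python
# Hypothetical sketch: scoring a device against the botnet-risk
# characteristics listed above. Field names and equal weighting are
# illustrative assumptions, not a published scheme.
RISK_FACTORS = [
    "always_on",
    "no_antivirus_or_updates",
    "no_offline_failsafe",
    "generous_network_vs_compute",
    "no_performance_drop_when_compromised",
]

def botnet_risk_score(device: dict[str, bool]) -> float:
    """Fraction of the listed risk factors the device exhibits (0.0-1.0)."""
    return sum(device.get(f, False) for f in RISK_FACTORS) / len(RISK_FACTORS)

cheap_camera = {f: True for f in RISK_FACTORS}
patched_hub = {
    "always_on": True,
    "no_antivirus_or_updates": False,
    "no_offline_failsafe": False,
    "generous_network_vs_compute": True,
    "no_performance_drop_when_compromised": False,
}

print(botnet_risk_score(cheap_camera))  # 1.0
print(botnet_risk_score(patched_hub))   # 0.4
```

A fleet-level view of such scores would let an operator prioritize which device classes to isolate or replace first, rather than treating all connected things as equally risky.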
The market moves too fast and is too globally complex for a regulatory approach to work effectively. Perhaps instead we need the IoT equivalent of an intelligence network, using the anti-abuse communities to establish a greater role in tracking and reporting abusive activity in order to create system-level resilience?
In the end, why does it matter?
Ultimately, one of the most problematic factors in IoT security for consumer products is that the impact of malicious activity isn’t generally felt by the user whose device has been compromised. It’s hard to work up a sense of security awareness about IoT botnets when they aren’t stealing my data – they are just making life hard for some faceless corporate entity.
This is where real ethnographic research into how humans use and plan to use the IoT becomes an important factor in the discussion. In June 2017 the Family Online Safety Institute in the UK held a roundtable looking at family uses of, and safety in, the IoT. As a result, across 2018 the IoT4Kids programme, led by a team at Lancaster University, explored with children aged 9-11 what they want to do with the IoT, using a BBC micro:bit as a programmable IoT device that the children could project their device and application aspirations onto. So, what do children want to use IoT devices for?
- They design “personal assistance” tools and applications – often involving location data, photographic data, video and audio, and datasets that reveal their image, voice, gender and geo-location. They imagine assistants that monitor behaviour and stage interventions, that offer surveillance and act in quasi-parental roles, and that infringe basic privacy.
- They imagine information-sharing and educational applications, the aim being “to replace teachers across the world with robots” or to enable peer-to-peer learning networks, creating a transfer of trust from teacher to machine.
- Boys specifically sought to simulate activities not usually available to children – for example driving a car, especially with augmented or virtual reality to allow for scary, imagined high-risk scenarios like driving through a war zone or a zombie apocalypse. This desire to engage with the risky adult world through simulation is of particular concern in the context of connected devices like gaming consoles, because “risk takers” are a known target category for online grooming.
- Girls specifically described applications to assuage loneliness – to provide friendship, comfort, hugs, affection and to offer emotional support in response to a sensed emotional state. This focus on care-giving creates significant categories of risk in that a child who forms a trusting bond with a digital thing is likely to be at greater risk of online grooming, bullying and radicalization especially where the child is vulnerable or facing adversity and low self-esteem.
In the 1950s the psychologist Donald Winnicott coined the term “transitional object” to describe “any material (or thing) to which an infant attributes a special value and by means of which the child is able to make the necessary shift from the earliest… relationship with mother to genuine object-relationships”.
The object is self-appointed – the child has entire sovereignty in choosing it although they are in general too young to understand that they have made the choice. The object is also within the psychological control of the child – although coming to this realization is a part of the process of learning where the child stops and the object starts, and ultimately leads to them growing out of their dependence on it. Critically, the object is profoundly psychologically reliable – as an inanimate object it has no agency and is therefore always understandable and can be apprehended. Working through dependency on and then independence from a transitional object is a critical part of understanding one’s self and therefore becoming ready to relate to others.
What happens when a transitional object is not within a child’s psychological control, but instead is networked to parental control or the control of a service provider? What happens when the object is no longer psychologically reliable but subject to intermittent failures of internet connectivity or battery life? What happens if the place you learn about how to relate to the world around you is a place filled with unreliable things?