An eighties classic – Zero Trust

A deep dive into Zero Trust, to help you navigate a zero-trust world and further secure your organization.

Last week, at ChannelCon in Chicago, I participated in a panel titled ‘Building trust in a Zero Trust world’ with several other industry experts. The core concept of Zero Trust is ‘trust nothing, verify everything’, and for many in the cybersecurity industry this has been the mantra we have lived by for our whole careers. Throughout my career there have been many terms and acronyms in the information technology industry that proved to be ‘for the moment’ or ‘fashionable’; Zero Trust does not fall into this group.

Long ago in a galaxy far far away – well, not that long ago really, and only across the pond – I worked for several notable financial organizations where security was a paranoia topic within technology teams. In the late eighties, one project I worked on stands out as an excellent example of this: the deployment of laptops to salespeople in the field, giving them access to comparative and account data ahead of an appointment with the customer. The data synchronization for the next day’s appointments was an end-of-day task using a 2400-baud modem (compressed data yielding an effective transfer rate of 4800 baud) with hardware-based DES encryption, and the user authenticated with a challenge-response, PIN-protected token. Additional security checks were built into the underlying software to ensure the device was permitted to connect, verifying unique hardware identifiers. The concept of taking mainframe-hosted data, putting it on a Novell file server, and then distributing it onto remote laptops in the field was bleeding-edge technology, and it caused many sleepless nights for mainframe security teams who considered this new generation of PC pioneers as wild west cowboys; the paranoia was intense.

The lack of trust in this bleeding-edge project instilled a zero-trust attitude: ‘trust nothing and verify everything’, and then, when possible, ‘verify it again’. The personal computer industry evolved quickly, and in many instances this mainframe ‘host’ paranoia was dampened and possibly even set aside. Yet here we are today talking about a similar approach, albeit more defined and grown-up than my experience in the late eighties. Oh, how I miss the eighties – my vinyl collection reminds me of those great times every day!

Zero trust in today’s technology environment is about instilling this same paranoia with a holistic view of the entire digital environment, regardless of location – on-premises, remote, or cloud – and regardless of who owns it or who may be using it. The rapid digital transformation of the last few years has forced companies to adopt, at least in part, some of the concepts that are deep-rooted within zero trust, such as multi-factor authentication and encryption. But this concept is less about specific technologies and more a mindset; for example, when a new employee joins a finance department, it’s easy for the busy manager to blanket-approve access to all the systems the team uses. In the world of zero trust, however, the manager needs to give more thought to which systems truly need to be accessed for the employee’s function, from what devices and which locations, possibly even extending to limits on access based on the time of day. This shift in thinking needs to be business-wide, not just a concept that the IT security team advocates for; there needs to be endorsement from the C-level down, throughout the entire organization.
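To make the finance-department example concrete, the contextual checks described above (role, device, location, time of day) can be sketched as a deny-by-default policy evaluation. This is a minimal illustration only – the system names, device identifiers, and rules are all hypothetical, not drawn from any real product:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str   # the requester's job function
    system: str      # the system being accessed
    device_id: str   # identifier of the requesting device
    location: str    # network location of the request
    hour: int        # local hour of the request, 0-23

# Hypothetical per-system policy: allowed roles, locations,
# managed devices, and permitted hours of access.
POLICIES = {
    "payroll": {
        "roles": {"finance-analyst", "finance-manager"},
        "locations": {"HQ", "VPN"},
        "managed_devices": {"LT-1001", "LT-1002"},
        "hours": range(7, 20),  # business hours only
    },
}

def is_access_allowed(req: AccessRequest) -> bool:
    """Grant access only when every contextual check passes (deny by default)."""
    policy = POLICIES.get(req.system)
    if policy is None:
        return False  # unknown system: deny
    return (
        req.user_role in policy["roles"]
        and req.location in policy["locations"]
        and req.device_id in policy["managed_devices"]
        and req.hour in policy["hours"]
    )
```

The key design choice is that access is denied unless every check passes, rather than granted unless something looks wrong – the same inversion of trust the paragraph above describes.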

There are numerous benefits to adopting a zero-trust model; one benefit that may not be obvious is simplification. If the entire digital environment, whether owned or used as a service, is treated as having no perimeter, then the process of protecting diverse assets becomes simpler; this is also true of users, as they will all be subject to the same access policies. Overlaying this approach with data-based decisions, which are likely to be automated, takes this to the next level. In a scenario where a user is connected and complies with location, device, and authentication requirements, but real-time analysis of traffic from that device shows an anomaly, the access granted could be revoked dynamically, requiring further investigation and possible remediation of what caused the alert.
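The dynamic revocation described above can be sketched as a session whose access survives only while real-time anomaly checks pass. Everything here is illustrative – the scoring scale, the threshold, and the class names are assumptions, standing in for whatever analytics engine actually feeds such a decision:

```python
class Session:
    """A granted session that can be revoked by real-time traffic analysis."""

    def __init__(self, user: str, anomaly_threshold: float = 0.8):
        self.user = user
        self.anomaly_threshold = anomaly_threshold
        self.active = True
        self.flagged_for_review = False

    def report_traffic_score(self, anomaly_score: float) -> None:
        """Receive a score from a (hypothetical) real-time traffic analyzer.

        0.0 means normal traffic, 1.0 means highly anomalous. Crossing the
        threshold revokes access and flags the session for investigation.
        """
        if anomaly_score >= self.anomaly_threshold:
            self.active = False
            self.flagged_for_review = True

    def is_active(self) -> bool:
        return self.active

# Usage: access stays granted under normal traffic, is revoked on anomaly.
session = Session("alice")
session.report_traffic_score(0.2)   # normal traffic: still active
session.report_traffic_score(0.95)  # anomaly: access revoked dynamically
```

Note that revocation is one-way here: once an anomaly trips the threshold, access stays revoked until a human or remediation process intervenes, mirroring the "investigate and remediate" step in the text.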

The monitoring and analysis of real-time events in this way can be achieved using technologies such as Endpoint Detection and Response (EDR). Automation of this type brings significant benefit: it restricts potential attackers’ ability to gain a significant advantage, as they are hampered by dynamic real-time policy enforcement – for example, lateral movement within the network could be blocked based on the unusual or unexpected actions the attackers take.

Real-time intelligence decision making was not available for the project I was involved in back in the eighties; I am certain though that had it been, the paranoid security teams attempting to control the new wild west of PC deployment would have insisted on it being used, and rightly so.
