Zero trust


Zero trust is a cybersecurity strategy that assumes no user, device or transaction is trustworthy by default -- each is treated as though it may already be compromised.

The zero-trust model requires strict identity and device verification, regardless of the user's location in relation to the network perimeter. A network that implements the zero-trust model is referred to as a zero-trust network.

The traditional approach to network security is known as the castle-and-moat model. Its premise is that gaining access to a network from the outside is difficult, but once inside the firewall, users are automatically trusted.
While there are various technologies and principles that can be used to enforce zero trust security, the basic fundamentals include the following:
  • Microsegmentation -- Security perimeters and network components are broken into smaller segments, each of which has its own access requirements.
  • Least-privileged access -- Users are granted access only to what they need to do their jobs effectively (a brief sketch of this, together with microsegmentation, follows the list).
  • Risk management analytics -- All network traffic is logged and inspected for suspicious activity. 
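
To make the first two fundamentals concrete, here is a minimal, deny-by-default access check written as a Python sketch; the segment names, roles and policy table are hypothetical and purely illustrative, not drawn from any particular product.

    # Hypothetical policy table: each microsegment lists the roles allowed to reach it.
    SEGMENT_POLICY = {
        "hr-database":     {"hr-analyst", "hr-admin"},
        "build-servers":   {"developer", "release-engineer"},
        "payment-gateway": {"payments-service"},
    }

    def is_access_allowed(role: str, segment: str) -> bool:
        """Deny by default: allow a role only if it is explicitly mapped to the segment."""
        return role in SEGMENT_POLICY.get(segment, set())

    # A developer can reach the build servers but not the HR database.
    assert is_access_allowed("developer", "build-servers")
    assert not is_access_allowed("developer", "hr-database")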

The use of zero-trust models is becoming more prevalent in the world of security access controls, as is the principle of least privilege. The message is clear: The fewer people who can access data, the more secure it is. And for those who can access the data, make sure they access only what they absolutely need.
The policy-based identity governance aspect of identity and access management is also becoming increasingly important to data security.

When these measures aren't put in place, or when they're not enforced, enterprises may find themselves experiencing data exposure. No company should take IAM for granted. Every business faces the potential of a breach, as evidenced by the access control struggles experienced by Amazon Web Services customers and the more recent Reddit data breach.
As we watch more companies fail to properly configure and enforce IAM policies, it becomes even more clear how important a role identity- and access-based security plays in keeping enterprise and individual data safe.

In the past, we've done a great job of making networks accessible. But with this increased availability, we've opened the door for attackers to move more easily around networks.

As we introduce mobility and cloud solutions, our networks are evolving and perimeters are dissolving. Even so, we are still building networks on a rigid, zone-based model, and the assumption is still being made that systems on the internal LAN are safer than external systems. This assumption has us applying different levels of trust based on the physical or logical location of systems; historically, this approach has proven not to work in the long term.

Today, we continue to use choke points, filtering devices and network gear to funnel traffic between these zones, but this isn't always efficient, secure or scalable when additional zones are needed. Segmentation is a basic tenet of information security, and using a zero-trust model shifts the mindset of where to segment and how to apply policies to endpoints.

In a zero-trust network, all the devices are deemed compromised and untrusted. It is here that policies, authentication variables, authorization and baselining help determine the trust level of systems.

Authentication variables are an important part not only of zero-trust networks, but also of gaining access to any system, application or data. It's in this phase that a system or user actually proves they are who they say they are, and the attributes gathered here feed the decision about whether they have the proper authorization.

When using a zero-trust mindset, there are multiple ways to set up authentication to build security into your sessions -- this can be device-based, user-based or a combination of the two.

The perimeter has melted and zones can no longer be implicitly trusted, so it's important to authenticate every session properly. This can be done using X.509 certificates together with a user account protected by two-factor authentication. Combining these methods creates stronger authentication variables and enables finer-grained access to resources. After a session is properly authenticated into a zero-trust network, these authentication variables can also be used as decision points for granting access to resources.
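
As one way of combining device-based and user-based factors, the sketch below checks a device certificate fingerprint against an allowlist and verifies a time-based one-time password for the user. It is a simplified stand-in for full X.509 validation, and the fingerprint value and parameter choices are invented for illustration.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    # Placeholder fingerprint (SHA-256 of the device's DER-encoded certificate).
    TRUSTED_DEVICE_FINGERPRINTS = {"3b7e" * 16}

    def totp_now(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // step
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def authenticate_session(device_cert_der: bytes, totp_secret_b32: str, user_code: str) -> bool:
        """Require both factors: a known device certificate and a valid user TOTP code."""
        device_ok = hashlib.sha256(device_cert_der).hexdigest() in TRUSTED_DEVICE_FINGERPRINTS
        user_ok = hmac.compare_digest(totp_now(totp_secret_b32), user_code)
        return device_ok and user_ok

Tying the session to both the device and the user means a stolen password alone, or a trusted laptop alone, is not enough to open a session.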

When implementing a zero-trust network, there needs to be an understanding of how authorization should be handled. Authorization in a zero-trust architecture is what determines which resources and data a given user or device will be allowed to reach.

Zero-trust networking depends on the principle of least privilege, recognizing that people and devices authenticate from many different locations and applications. A policy must be created to allow this to occur; under the zero-trust mantra, a single form of authentication is no longer sufficient to drive an authorization decision.

We need to take into account what can be used to identify and authorize an identity in a zero-trust network. This means creating a policy based on a combination of system and user accounts, so that each authorization decision is unique to the variables of that request. The policy can also draw on anything about the authorization request that helps enforce granular access, such as the destination, IP address, hardware information, risk and trust scores, and authentication methods. In a zero-trust network, users should always be given the lowest level of privilege necessary until there is a valid need to escalate their access.
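
A rough sketch of how such a policy decision might be expressed follows; the attribute names, the "finance-reports" destination, the risk threshold and the address ranges are all hypothetical, chosen only to show several request variables feeding one deny-by-default authorization decision.

    from dataclasses import dataclass, field
    from ipaddress import ip_address, ip_network

    # Hypothetical request attributes echoing the variables named above:
    # destination, source IP, device hardware, risk score and authentication methods.
    @dataclass
    class AccessRequest:
        user: str
        device_id: str
        destination: str
        source_ip: str
        risk_score: float                        # 0.0 (low risk) to 1.0 (high risk)
        auth_methods: set = field(default_factory=set)

    MANAGED_RANGES = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

    def authorize(req: AccessRequest) -> bool:
        """Deny by default; grant only when every condition for this destination holds."""
        if req.destination == "finance-reports":
            strong_auth = {"x509", "totp"} <= req.auth_methods
            low_risk = req.risk_score < 0.3
            managed_source = any(ip_address(req.source_ip) in net for net in MANAGED_RANGES)
            return strong_auth and low_risk and managed_source
        return False

    # A strongly authenticated, low-risk request from a managed address is granted.
    request = AccessRequest(user="jsmith", device_id="laptop-042",
                            destination="finance-reports", source_ip="10.1.2.3",
                            risk_score=0.1, auth_methods={"x509", "totp"})
    assert authorize(request)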

Some vendors have made it easier to create zero-trust networks, but they aren't the be-all and end-all. Even though enterprises are able to create zero-trust networks without them, these vendors do offer great opportunities to organizations that might not have the resources to develop a program on their own.
A common way of using this technology -- which is similar to software-defined networking -- is for all the systems to encrypt their communications over the data plane, which is also where the policies are enforced. By pushing this down to a low level within the network, users and devices are able to make access decisions quickly and securely. There's also the ability to use trust or risk scores to evaluate access requests based on the resource being requested.
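
As a toy illustration of resource-based trust scoring, the snippet below folds a few posture signals into a score and compares it against a per-resource minimum; the signals, weights and thresholds are invented for the example rather than taken from any real scoring scheme.

    def trust_score(device_patched: bool, cert_valid: bool, mfa_used: bool, anomalies: int) -> float:
        """Combine posture signals into a 0-to-1 score; higher means more trustworthy."""
        score = 0.1                              # small baseline for an authenticated session
        score += 0.3 if device_patched else 0.0
        score += 0.3 if cert_valid else 0.0
        score += 0.3 if mfa_used else 0.0
        score -= 0.2 * anomalies                 # each observed anomaly lowers trust
        return max(0.0, min(1.0, score))

    def allowed(resource_min_trust: float, score: float) -> bool:
        """Grant the request only when the session's trust meets the resource's bar."""
        return score >= resource_min_trust

    # A fully healthy session clears a sensitive resource's 0.9 bar;
    # the same session with two anomalies does not.
    assert allowed(0.9, trust_score(True, True, True, 0))
    assert not allowed(0.9, trust_score(True, True, True, 2))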

When we grasp the idea that everything in the network should be put through the wringer before any type of trust is applied to it, we reach the zero-trust mindset. Using these methods and adopting the mantra of never trust, always verify will help reduce risk in your network and limit an adversary's ability to move freely within your environment.
