
Zero trust


Zero trust is a cybersecurity strategy that treats every user, device and transaction as potentially compromised, extending no trust by default.

The zero-trust model requires strict identity and device verification, regardless of the user's location in relation to the network perimeter. A network that implements the zero-trust model is referred to as a zero-trust network.

The traditional approach to network security is known as the castle-and-moat model: gaining access to the network from the outside is difficult, but once inside the firewall, users are automatically trusted. Zero trust discards that assumption.

While there are various technologies and principles that can be used to enforce zero-trust security, the basic fundamentals include the following (a brief policy sketch follows the list):
  • Microsegmentation -- Security perimeters and network components are broken into smaller segments, each of which has its own access requirements.
  • Least-privileged access -- Users are only granted access to what they need to do their job effectively.
  • Risk management analytics -- All network traffic is logged and inspected for suspicious activity. 
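
To make the first two fundamentals concrete, here is a minimal Python sketch of a per-segment, least-privilege access check. The segment names, roles and the check_access helper are hypothetical examples, not taken from any particular product.

    # Minimal sketch of per-segment, least-privilege access checks.
    # Segment names, roles and permissions are hypothetical examples.
    SEGMENT_POLICIES = {
        "payments-db": {"allowed_roles": {"payments-service"}, "allowed_actions": {"read"}},
        "hr-files": {"allowed_roles": {"hr-staff"}, "allowed_actions": {"read", "write"}},
    }

    def check_access(segment: str, role: str, action: str) -> bool:
        """Grant access only if the role is allowed in this segment and the
        requested action falls within its least-privilege set."""
        policy = SEGMENT_POLICIES.get(segment)
        if policy is None:
            return False  # unknown segment: deny by default
        return role in policy["allowed_roles"] and action in policy["allowed_actions"]

    # A payments service may read the payments database, but not write to it.
    print(check_access("payments-db", "payments-service", "read"))   # True
    print(check_access("payments-db", "payments-service", "write"))  # False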

The use of zero-trust models is becoming more prevalent in the world of security access controls, as is the principle of least privilege. The message is clear: The fewer people who can access data, the more secure it is. And those who can access the data should access only what they absolutely need. The policy-based identity governance aspect of identity and access management (IAM) is also becoming increasingly important to data security.

When these measures aren't put in place, or when they're not enforced, enterprises may find themselves experiencing data exposure. No company should take IAM for granted. Every business faces the potential of a breach, as evidenced by the access control struggles experienced by Amazon Web Services customers and the more recent Reddit data breach.
As more companies fail to properly configure and enforce IAM policies, it becomes even clearer how important a role identity- and access-based security plays in keeping enterprise and individual data safe.

In the past, we've done a great job of making networks accessible. But with this increased availability, we've opened the door for attackers to move more easily around networks.

As we introduce mobility and cloud solutions, our networks are evolving and perimeters are dissolving. Even so, we are still building networks on a rigid, zone-based model, and the assumption is still being made that systems on the internal LAN are safer than external systems. This assumption has us applying different levels of trust based on the physical or logical location of systems; historically, that approach has been shown not to work in the long term.

Today, we continue to use choke points, filtering devices and network gear to funnel traffic between these zones, but this isn't always efficient, secure or scalable when additional zones are needed. Segmentation is a basic tenet of information security, and using a zero-trust model shifts the mindset of where to segment and how to apply policies to endpoints.

In a zero-trust network, every device is treated as untrusted and potentially compromised. Policies, authentication variables, authorization and baselining then determine how much trust to extend to each system.

Authentication variables are an important part of zero-trust networks and of gaining access to any system, application or data. It's in this phase that a system or user proves they are who they claim to be, and that verified identity is then used to determine whether they have the proper authorization.

When using a zero-trust mindset, there are multiple ways to set up authentication to build security into your sessions -- this can be device-based, user-based or a combination of the two.

The perimeter has melted and zones can no longer be trusted on their own, so it's important that every session is properly authenticated. This can be done using X.509 device certificates combined with a user account protected by two-factor authentication. Combining these methods creates stronger authentication variables and enables finer-grained access to resources. Once a user or device is properly authenticated into a zero-trust network, these authentication variables can also be used as decision points for granting access to resources.
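
As a rough sketch of combining device-based and user-based authentication, the Python snippet below builds a TLS server context that requires a client (device) certificate and verifies a time-based one-time password for the user. The certificate paths and the base32 TOTP secret are placeholders, and a real deployment would delegate both checks to hardened libraries or an identity provider.

    import base64, hashlib, hmac, ssl, struct, time

    # Device-based factor: require an X.509 client certificate (mutual TLS).
    # The certificate and CA file paths below are placeholders.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    context.load_verify_locations(cafile="trusted-devices-ca.pem")
    context.verify_mode = ssl.CERT_REQUIRED  # reject sessions without a valid device cert

    # User-based factor: verify a TOTP code (RFC 6238) against a shared secret.
    def verify_totp(secret_b32: str, code: str, window: int = 1) -> bool:
        key = base64.b32decode(secret_b32)
        step = int(time.time()) // 30
        for offset in range(-window, window + 1):
            msg = struct.pack(">Q", step + offset)
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            pos = digest[-1] & 0x0F
            value = (struct.unpack(">I", digest[pos:pos + 4])[0] & 0x7FFFFFFF) % 1_000_000
            if hmac.compare_digest(f"{value:06d}", code):
                return True
        return False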

When implementing a zero-trust network, there needs to be an understanding of how authorization should be handled. Authorization in a zero-trust architecture is indispensable for determining which resources and data a given user or device is allowed to reach.

Zero-trust networking depends on the principle of least privilege and recognizes that people and devices authenticate from many different locations and applications. A policy must be created to allow this, and under the zero-trust mantra a single form of authentication is no longer sufficient to drive an authorization decision.

We need to take into account what can be used to identify and authorize an identity in a zero-trust network. This means creating a policy based on a combination of system and user accounts, which results in a unique authorization decision built from the variables of each request. The policy might also evaluate any attribute of the request that it needs in order to grant granular access, such as the destination, IP address, hardware information, risk and trust scores, and authentication methods. In a zero-trust network, users should always be given the least level of privilege necessary until there is a valid need to escalate their access.
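
To illustrate what such a policy could look like, here is a minimal Python sketch of an authorization decision that combines user, device and request attributes. The attribute names, resources, required factors and risk thresholds are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        device_id: str
        source_ip: str
        destination: str
        auth_methods: frozenset  # e.g. {"x509", "totp"} used in this session
        risk_score: float        # 0.0 (low risk) to 1.0 (high risk)

    # Hypothetical per-resource policy: required auth methods and tolerated risk.
    POLICY = {
        "payments-api": {"required_auth": {"x509", "totp"}, "max_risk": 0.3},
        "wiki": {"required_auth": {"totp"}, "max_risk": 0.7},
    }

    def allow_request(req: AccessRequest) -> bool:
        """Authorize only when the session's authentication variables satisfy
        the destination's policy and the risk score is within bounds."""
        policy = POLICY.get(req.destination)
        if policy is None:
            return False  # unknown resource: deny by default
        if not policy["required_auth"] <= req.auth_methods:
            return False  # a required authentication factor is missing
        return req.risk_score <= policy["max_risk"]

    req = AccessRequest("alice", "laptop-42", "10.0.1.7", "payments-api",
                        frozenset({"x509", "totp"}), risk_score=0.1)
    print(allow_request(req))  # True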

Some vendors have made it easier to create zero-trust networks, but they aren't the be-all and end-all. Even though enterprises are able to create zero-trust networks without them, these vendors do offer great opportunities to organizations that might not have the resources to develop a program on their own.
A common way of using this technology -- which is similar to software-defined networking -- is to have all systems encrypt their communications over the data plane, which is where the policies are enforced. By pushing this down to a low level within the network, users and devices are able to make decisions quickly and securely. Trust or risk scores can also be used to evaluate access requests based on the resource users are asking to reach.
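
As a loose illustration of using trust or risk scores as decision points, the sketch below computes a session trust score from a few signals and compares it to a per-resource threshold. The signals, weights and thresholds are invented for the example and not taken from any specific vendor.

    # Hypothetical signals and weights for computing a session trust score.
    WEIGHTS = {
        "managed_device": 0.4,   # device is enrolled and posture-checked
        "strong_auth": 0.4,      # session used certificate plus one-time password
        "known_location": 0.2,   # request comes from an expected network or location
    }

    # Hypothetical minimum trust score required for each resource.
    THRESHOLDS = {"source-code": 0.9, "intranet-wiki": 0.5}

    def trust_score(signals: dict) -> float:
        """Weighted sum of boolean session signals, in the range 0.0 to 1.0."""
        return sum(weight for name, weight in WEIGHTS.items() if signals.get(name))

    def can_access(resource: str, signals: dict) -> bool:
        # Unknown resources default to requiring full trust.
        return trust_score(signals) >= THRESHOLDS.get(resource, 1.0)

    session = {"managed_device": True, "strong_auth": True, "known_location": False}
    print(can_access("intranet-wiki", session))  # True  (0.8 >= 0.5)
    print(can_access("source-code", session))    # False (0.8 <  0.9)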

When we grasp the idea that everything in the network should be put through the wringer before any level of trust is applied to it, we have reached the zero-trust mindset. Using these methods and adopting the mantra of never trust, always verify will help reduce risk in your network and limit an adversary's ability to move freely within your environment.
