Trust no one. When it comes to implementing cybersecurity today, that’s the resounding imperative. Over the last 20 years, the collective IT and cybersecurity community has applied myriad piecemeal approaches—process, policy, regulation, and technology—to defend sensitive data and critical infrastructure, applications, identities, devices, and communications. And yet, with each new approach, the threat landscape has continued to grow in complexity, sophistication, and efficacy. The old ways of defending against and responding to cyberthreats are no longer effective. Today, the most effective mindset and practice for cybersecurity is a Zero Trust posture. Let’s consider how we got here and what it means for the future of the DIB and all industrial sectors.
Cybersecurity Lessons from David and Melissa
Around March 26, 1999, a macro virus named Melissa began mass-mailing itself across the Internet through Microsoft Word and Outlook-based systems. Melissa had a simple attack plan: someone received an email with a Word document attached and unwittingly opened it; the macro embedded within the file then accessed 50 addresses in that user’s contact list, and the same message and attachment were propagated to the next lucky 50. Melissa was simple, elegant, and expensive.
According to the FBI, its originator, David Lee Smith of New Jersey, caused over $80 million in damages to businesses and personal computers during the mere week-long incident. Many IT leaders and organizations (re)learned at the turn of the century that users are not great discerners of trust. In 2002, Smith was sentenced to 20 months in federal prison and fined a whopping $5,000, a lesson he would not soon forget.
Yet, what did Smith and Melissa teach us about the need for a Zero Trust methodology?
- Humans are easy targets for breaches. Once someone opened the email-attached Word document, Melissa was free to run rampant across their network. User curiosity and the desire to operate expediently without considering the dangers or taking the time to inspect messages enabled Melissa to spread quickly before being detected.
- Our devices (or endpoints) can’t be trusted. The most popular computing devices in 1999 were running Windows 95 and Windows 98 Second Edition. (We don’t talk about Windows ME.) Back then, organizations relied on third-party software for antivirus protection. There were no firewalls on computers, and devices in 1999 were largely unprotected due to a lack of updated antivirus and threat protection software. Neither the device nor its owner could detect and defend against malicious activity or changes in user behavior.
- Unsecure apps are an easy target to breach. In 1999, apps were infrequently updated. It took significant planning, people, and time to roll out updates to operating systems, threat protection software, and applications across all devices and endpoints. Once infected, organizations were vulnerable to an array of new attacks.
- Assuming everyone and everything accessing your network is trustworthy is a failure point. Melissa happened in the early days of the Internet. Networks were slow and remote access happened over dial-up or dedicated DSL trunks. All people and devices inside the organization and network were considered trusted. Melissa could take advantage of this glaring security hole.
- Once your data is compromised, your entire system is at risk. In this case, Melissa’s payload was a macro program inside a Word document. Once the document was opened, a series of commands ran that disabled security features and resent the message to others. Without active monitoring, nothing alerted administrators or network managers that something suspicious was happening.
- Infrastructure is only as good as it is intelligently managed. Melissa was able to proliferate quickly and undetected across multiple organizations’ networks. There were no signals from the servers that anything unusual was happening, and detection only occurred when a system failure did.
The Melissa virus incident happened while most IT organizations were deploying new devices, operating systems, and applications to correctly display calendar dates beyond December 31, 1999, in preparation for Y2K. Melissa ushered in a wave of new forms of malware and cyberinfrastructure vulnerabilities. It also began the evolution of our cybersecurity posture from highly reactive incident response to proactive cyber intelligence and behavior monitoring.
The FBI created a Cyber Division shortly thereafter, in response to this and similar events. Similarly, the Biden administration has released a recent series of cybersecurity-focused executive orders in the wake of many crippling attacks and created several new federal agency groups, such as the Defense Critical Supply Chain Task Force within the DoD.
Microsoft Introduces Trustworthy Computing
In 2002, Bill Gates published Microsoft’s Trustworthy Computing (TwC) memo, which began a broad series of efforts to build security, privacy, safety, and integrity into the design, development, and deployment of computing software. The memo marked a seminal awakening for Microsoft. Although broadband and remote work were not yet widespread, Microsoft understood that computing is as critical to a nation’s infrastructure as water, power, and communications. Therefore, it must be reliable, safe, and able to operate with integrity and without compromise.
The TwC development shift wasn’t just about getting rid of software bugs, deploying patches faster, writing better code, or achieving highly available and resilient computing systems. It was an acknowledgment that data breaches don’t just create negative financial and technological repercussions; they also drastically impact the reputation of a company or agency. If your core partners and customers can’t trust that your computing infrastructure is secure, why would they continue to do business with you? The TwC mindset was a cybersecurity shift that extended beyond the Redmond-based company into every facet of government, business, and private citizens’ lives around the world.
In the wake of TwC, new federal regulations and policies were signed into law. Universities developed and revised curricula to teach how to write and develop secure code. The computing industry changed its security posture to become more vigilant and proactive, enacting policies, procedures, and systems management capabilities that began to reduce the attack surface. However, Gates’ prescient memo forecast that threats would evolve to attack everything from silicon chips to the cloud to intelligent devices and services. Trustworthy Computing wasn’t merely an end goal; it became the first leg of the journey toward Zero Trust.
Software Assurance—the Next Leg of Zero Trust Evolution
Ten years after the TwC memo and a decade of new cybersecurity threats, Software Assurance emerged as the next grand principle for cybersecurity. The Committee on National Security Systems defines software assurance as follows:
“Application of technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner, are free from accidental or intentional vulnerabilities, provide security capabilities appropriate to the threat environment, and recover from intrusions and failures.”
For the first time, SwA moved business continuity to the forefront of the cybersecurity conversation. It recognized that zero-day vulnerabilities are inherent in software. Moreover, SwA principles state that when an exploit occurs, the organization needs the ability to respond and recover fast. This reframed patching software: no longer merely a way to minimize business disruption, it became a way to reduce vectors of attack.
In 2012, the nation’s ability to acquire software that was free from vulnerabilities and worked as intended still wasn’t a reality. As a result, identifying and managing risk, along with threat modeling, became cornerstones of SwA. Initially, SwA was meant to ensure the U.S. government was acquiring secure software to protect the nation’s infrastructure and economy. However, the proliferation of connected devices and the growth of the Internet made the need for secure and trustworthy cyberinfrastructure universal across all sectors of industry.
Advent of Defense in Depth to Protect the Digital Estate
Alongside software assurance principles, Defense in Depth emerged as the next approach to frustrate and complicate an attacker’s ability to breach an organization’s cyberinfrastructure and access critical data. Defense in Depth is a layered and diversified approach to cybersecurity. If a malicious actor is able to break through one line of defense, the next layer presents a different countermeasure to deny further intrusion into the network. Defense in Depth employs multiple, redundant, and diversified layers of security, and each countermeasure must be an effective layer of defense on its own. However, it is the totality of this approach that makes it effective in preventing a malicious actor from gaining unchallenged access.
Cyberthreats range from disgruntled employees and customers to nation-states and organized crime to force majeure. By creating a threat model and performing penetration tests for each type of attack scenario, organizations can lower their cybersecurity risk profile and reduce the network attack surface. Legacy network and cloud topologies have all been fortified using the Defense in Depth approach.
What happens when work is redefined as hybrid work and the digital estate is highly distributed?
Historically, the enterprise digital estate has been well defined and relatively easy to protect with a Defense in Depth approach. But what happens if the nature of work is fundamentally redefined as remote work and the digital estate becomes highly distributed? Should employee resources and applications remain behind corporate firewalls? Moreover, the sophistication, resourcefulness, and volume of digital adversaries are increasing; the most dangerous malicious actors are well-funded, persistent threats to every organization’s cyberinfrastructure. In this time of digital transformation, we have the opportunity to rethink our approach to cybersecurity with Zero Trust principles.
What Is Zero Trust Architecture?
In the old cybersecurity model, an employee or an application deployed by IT was implicitly trusted by default. In a Zero Trust Architecture, no person, application, or device is trusted by default. In fact, the network should be considered a hostile environment in which malicious actors have already established a foothold. This is rooted in the belief that no cyber system can provide a 100% guarantee of detecting and preventing a breach of any kind. By flipping the cybersecurity model on its head, Zero Trust resets how we approach securing our digital estate in a distributed cloud and remote work world.
Core Assumptions for Zero Trust
According to Evan Gilman and Doug Barth, authors of Zero Trust Networks: Building Secure Systems in Untrusted Networks, Zero Trust relies on five core assumptions:
- Assume the network has been breached.
- External and internal threats exist on the network at all times.
- Locally connected devices, applications, and people are not inherently trustworthy.
- All devices, users, applications, and usage cases should be authorized, authenticated and verified with the least privileges required for a task and monitored.
- Policies must be dynamic and aggregate data from as many sources as are available to create continued intelligence about what’s happening, where, and why across the digital estate.
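The fifth assumption, dynamic policy built from aggregated signals, can be sketched in a few lines of code. The signal names, weights, and thresholds below are illustrative assumptions for this article, not the schema of any real product:

```python
def risk_score(signals: dict) -> float:
    """Combine weighted signals (each 0.0 = benign .. 1.0 = suspicious)."""
    weights = {
        "failed_logins": 0.4,     # identity telemetry
        "new_device": 0.3,        # endpoint telemetry
        "unusual_location": 0.2,  # network telemetry
        "off_hours_access": 0.1,  # behavioral telemetry
    }
    # Missing signals default to benign (0.0); unknown signals are ignored.
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

def policy(signals: dict, threshold: float = 0.5) -> str:
    """A dynamic policy: the same user may be allowed at 9 a.m. from a
    known laptop and challenged at midnight from a brand-new device."""
    score = risk_score(signals)
    if score >= threshold:
        return "deny"
    if score >= threshold / 2:
        return "challenge"  # e.g., require step-up MFA
    return "allow"
```

The point of the sketch is that the decision is recomputed on every request from whatever telemetry is currently available, rather than granted once and remembered.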
Microsoft Zero Trust Guiding Principles
Microsoft simplifies these five assumptions into three guiding principles for implementing Zero Trust:
- Verify explicitly. Identity credentials for anyone accessing your corporate network and applications must be vetted based on device, location, services accessed, user identity, and other behaviors.
- Only allow least-privileged access—grant only the level of privilege needed to accomplish the tasks appropriate for the requester gaining access to the network.
- Act as if you’ve been breached. All of your corporate assets should operate as if a malicious actor has already gained access to your network and applications.
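The three principles can be sketched as a single access-decision function. The role table, signal names, and scope strings below are hypothetical, a minimal illustration rather than Microsoft's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_compliant: bool  # e.g., patched, encrypted, MDM-enrolled
    mfa_verified: bool      # identity proven with a second factor
    requested_scopes: set   # permissions the caller is asking for

# Hypothetical per-role scope table: each role is allowed only
# the permissions its tasks require.
ROLE_SCOPES = {
    "engineer": {"repo:read", "repo:write"},
    "auditor": {"logs:read"},
}

def decide(req: AccessRequest, role: str) -> set:
    """Return the scopes to grant; an empty set means deny."""
    # Principle 1: verify explicitly; never trust network location alone.
    if not (req.mfa_verified and req.device_compliant):
        return set()
    # Principle 2: least privilege; intersect the request with the role.
    granted = req.requested_scopes & ROLE_SCOPES.get(role, set())
    # Principle 3: assume breach; log every decision for investigators.
    print(f"grant user={req.user_id} scopes={sorted(granted)}")
    return granted
```

Note that even a fully verified engineer asking for `logs:read` gets nothing beyond the engineer role's scopes; the grant is the intersection, not the request.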
There are two facets of Zero Trust that are absolutely crucial for the DIB ecosystem to understand— (1) the intensity and effectiveness of cyberthreats are scaling up at an exponential rate; (2) Zero Trust is a data-centric model that requires intelligence from across your digital estate to keep your organization secure.
Cybersecurity Maturity Model Certification (CMMC) and Zero Trust
Is there a requirement for Zero Trust for the DIB? The short answer is “mostly.”
Certain practices explicitly require organizations to conduct activities or enact policies directly aligned with Zero Trust. For example, Configuration Management (CM) practice 2.062 asks companies to “Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.” Other practices are not as directly tied to Zero Trust, but IT and security leaders can elect (and are encouraged in other CMMC documents) to apply Zero Trust to them.
Despite CMMC not explicitly spelling out Zero Trust and its various doctrines, the overarching push by the Administration is towards Zero Trust for all Federal systems and all systems within the supply chain. On May 12, 2021, President Biden signed the Executive Order on Improving the Nation’s Cybersecurity. Under Sec. 3. Modernizing Federal Government Cybersecurity, the President has ordered that the entire Federal Government 1) move to secure cloud services, including Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) and 2) advance toward Zero Trust Architecture.
The order further states that CISA shall review “agency-specific cybersecurity requirements that currently exist,” which include CMMC, “… and recommend to the FAR Council standardized contract language.” CISA is also charged with modernizing its current cybersecurity programs to be fully functional in cloud-computing environments with Zero Trust Architecture. In July 2021, the Administration released a memo directed at creating new cybersecurity practices and postures for critical infrastructure, like the electricity subsector, natural gas, and water/wastewater sectors. These new practices will be largely based on National Institute of Standards and Technology (NIST) frameworks, including Special Publication (SP) 800-207, "Zero Trust Architecture". According to MeriTalk, DoD is up next: officials are currently drafting a memo to incorporate CMMC and Zero Trust Architecture for the DIB.
As the network perimeter evaporates and the digital estate becomes highly distributed, Zero Trust makes security far more effective for the DIB. Zero Trust is a fundamental shift away from business as usual. In fact, some corporations have already taken an Internet-first approach for all employees and devices connecting to the digital estate. Moreover, these companies have moved all employee workloads into secure, Zero Trust cloud architectures.
The Start of the Zero Trust Journey
Over the last 20 years, our cybersecurity posture has evolved from Trustworthy Computing to Software Assurance to Defense in Depth, each layered on the last. CMMC incorporates best practices from all of these cyber frameworks so the DIB can be a highly secure and resilient community.
Implementing Zero Trust principles is not an off-the-shelf acquisition or consultative solution—it’s a journey. You can start your journey by assessing your organization’s current cyber posture against the three guiding principles identified above. Ask the following questions to get started:
- If my network was under attack, what signals do I have available to me to see the progress of the breach, initiate countermeasures and recovery, and evict the adversary and restore business continuity? (CMMC – AU, IR, RE, SA, SI)
- What measures have I taken to authorize and grant access to my digital estate for digital identity, devices, applications, and behaviors? (CMMC – AC, IA, SC)
- Are my policies across the network dynamic to only grant the least privilege necessary to perform tasks and provide checks when a policy is circumvented or violated? (CMMC – AU, CM, CA, SI)
It’s time to ask some hard questions about our organizations’ cybersecurity posture. The economic cost of a breach is far greater than it was when the Melissa macro virus was released. Malicious adversaries can persist inside your digital estate for years without detection. Moreover, the reputational fallout of a cyber breach causes far greater collateral damage to your brand than it did at the turn of the century. Zero Trust is the next opportunity to evolve and meet the cyberthreats to our digital estate.
Trust no one and no thing on your network – always verify. However, you can find a trustworthy partner to help you begin your Zero Trust journey today (shameless Summit 7 plug).
We invite you to read the other articles in this series on Zero Trust.