Staying Secure in 2017: A Step-by-Step Guide to Guarding Your Organization

The actionable steps to staying safe in the age of cybercrime

By Bob Janssen, CTO, founder and SVP of Innovation, RES

As Forbes’ Technology Council recently proclaimed, 2017 will be the year of cybersecurity concern. While a great many individuals and organizations are doing tremendous work to combat the problem, cybersecurity can sometimes seem like a distant concern rather than an everyday issue. Instead of speaking about cybersecurity in grandiose terms, it is imperative to get down to business and face the ‘real world’ risks, and the practical steps that IT and security teams can take to keep their organizations from becoming the next embarrassing security breach headline. It is impossible for organizations to devote 100% of their time to state-of-the-art cybersecurity excellence, especially considering the dearth of cybersecurity talent that many firms face.

Cybersecurity professionals and IT departments are given the tough task of keeping spending down, empowering employees to be as productive as possible, and responding quickly to ever-changing security threats such as the rise of IoT devices and bots. Given these pressures, it is essential to prioritize the decisions and actions that affect security imperatives sooner rather than later.

There is one great thing organizations can do to accomplish this: develop a dynamic whitelisting strategy for access management. Whitelisting provides more than a list of trusted websites, apps, or users. It can also enforce security access controls based on individual identities and contextual attributes such as time of day or location. Used properly, whitelisting helps secure your data and protect your organization from threats. Below is a guide that security professionals can follow to protect their organizations without slowing down regular business operations.

Step 1: Reexamine and step up whitelisting policies

Ask: “Do we have a central repository of well-defined whitelisting policies?”

Dynamic whitelisting is a core best practice for enterprise security and one of the best ways to enforce access policies. It entails restricting user access and code execution by default to only what is specifically permitted and known to be safe. Whitelisting should also take into account identity and context attributes such as time of day, location, or device. This model is essential for protecting your organization from all kinds of threats — including malicious hosts, hijacked user IDs, insider threats, and the like.

A primary security requirement for an organization is therefore a unified repository of clearly defined whitelisting policies. These policies can be owned and controlled by different individuals with appropriate authority across an organization, but a single, reliable, and up-to-date place for maintaining whitelisting policies is essential across all resources, parameters, and user groups.
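To make the idea concrete, the check described above can be sketched in a few lines. This is a minimal illustration, not a product implementation; the `WhitelistPolicy` record, its attribute names, and the example resource are all hypothetical.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical policy record: a resource plus the identity and context
# attributes under which access to it is permitted.
@dataclass
class WhitelistPolicy:
    resource: str
    allowed_roles: set
    allowed_hours: tuple      # (start, end) as datetime.time values
    allowed_locations: set

def is_access_allowed(policy: WhitelistPolicy, role: str, now: time, location: str) -> bool:
    """Grant access only when every identity and context check passes."""
    start, end = policy.allowed_hours
    return (
        role in policy.allowed_roles
        and start <= now <= end
        and location in policy.allowed_locations
    )

# Example: clinicians may reach a clinical system on-site during business hours.
ehr_policy = WhitelistPolicy(
    resource="ehr-system",
    allowed_roles={"clinician"},
    allowed_hours=(time(7, 0), time(19, 0)),
    allowed_locations={"main-campus"},
)

print(is_access_allowed(ehr_policy, "clinician", time(9, 30), "main-campus"))  # True
print(is_access_allowed(ehr_policy, "clinician", time(9, 30), "off-site"))     # False
```

The point of the sketch is that identity alone is not enough: the same user with the same role is denied when the context attributes fall outside policy.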

Step 2: Don’t depend on “script heroes”  

Ask: “Does our implementation and enforcement of our access policies still depend on manual configuration and/or homegrown scripts?”

Policies alone do not make a secure enterprise. An organization also needs a way to implement and enforce those policies in an automated way. Chances are, however, that an organization still depends on a wide range of disparate mechanisms to give users whitelist-appropriate access to digital resources. These likely include application- and database-specific admin tools and homegrown provisioning scripts.

There are many problems inherent in depending on these fragmented access provisioning mechanisms. From a security perspective, they are simply too unreliable because they are subject to human error and they’re not intrinsically linked to the underlying policies they have been created to enforce. If an organization still depends on “script heroes” to ensure the right people get access to the right resources at the right time, it is exposing itself to unnecessary risk. Instead, maintaining a unified, manageable, and automated mechanism for executing an organization’s access policies can offset these concerns.
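The contrast with homegrown scripts can be illustrated with a toy provisioning engine in which every grant is derived from a single policy table, so enforcement is always traceable to a policy entry. The role names and resources here are hypothetical placeholders.

```python
# Hypothetical unified policy repository: role -> resources that role
# is whitelisted for. In a real system this would live in a managed
# store, not a module-level dict.
POLICY_REPOSITORY = {
    "finance": {"erp", "expense-portal"},
    "engineering": {"git", "ci-server"},
}

def provision_user(user_role: str) -> set:
    """Return exactly the resources the central policy allows.

    Nothing is granted that cannot be traced back to a policy entry,
    which is the property ad hoc scripts cannot guarantee.
    """
    return POLICY_REPOSITORY.get(user_role, set())

print(sorted(provision_user("finance")))   # ['erp', 'expense-portal']
print(sorted(provision_user("visitor")))   # []
```

Because the enforcement function reads the policy store directly, updating a policy updates enforcement everywhere at once, with no script to forget.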

Step 3: When employees leave – make sure your data doesn’t leave with them

Ask: “When someone leaves our company, are all of their digital privileges immediately, automatically, and entirely revoked?”

One of the single most important policy imperatives is the complete revocation of an employee’s digital privileges immediately upon termination. Most organizations don’t have a simple, automated, and reliable means of immediately eliminating an individual’s access privileges across every application, database, SharePoint instance, communications service, etc. Some of those privileges can remain in place days, weeks, or even months after an employee is terminated — leaving the organization exposed to risks that its breach detection and prevention tools can’t stop.

This is why in addition to having a unified system for managing access privileges across the enterprise, an organization also needs to appropriately integrate that system with whatever other systems can generate a valid termination event — including an organization’s core identity management systems, HR applications, and contractor databases. Only such integration can give an organization full confidence in the timely and complete revocation of digital privileges.
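An event-driven revocation flow of the kind described above can be sketched as follows. The grant table, event source names, and user IDs are hypothetical; the point is that a termination event from any integrated system walks every connected resource automatically.

```python
# Hypothetical access-grant table: user -> resources currently granted.
ACCESS_GRANTS = {
    "jdoe": {"crm", "sharepoint-finance", "vpn", "email"},
    "asmith": {"email", "vpn"},
}

# Audit trail of revocations, useful later for proving diligence.
REVOCATION_LOG = []

def revoke_all_access(user_id: str, source_event: str) -> int:
    """Immediately and completely revoke a user's privileges.

    Any system that can emit a valid termination event (HR application,
    identity manager, contractor database) calls this one entry point.
    Returns the number of grants revoked.
    """
    grants = ACCESS_GRANTS.pop(user_id, set())
    for resource in sorted(grants):
        # In a real system this would call each resource's admin API.
        REVOCATION_LOG.append((user_id, resource, source_event))
    return len(grants)

# An HR termination event drives the revocation, not a manual checklist.
revoked = revoke_all_access("jdoe", source_event="hr:termination")
print(revoked)                    # 4
print("jdoe" in ACCESS_GRANTS)    # False
```

Because revocation is a single function fed by events, there is no window in which a forgotten SharePoint instance or database account survives the termination.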

Step 4: Put access controls in place

Ask: “Can we reliably prevent users from accessing the wrong files from the wrong places at the wrong times?”

Most organizations can only apply a limited and relatively crude set of parameters to their access controls. In the real world, an organization’s access policy parameters and controls must be much richer and more context-aware. Common examples of this include:

  • Geo-fencing. It often makes sense to constrain a user’s access privileges based on location. A doctor, for example, may be allowed wireless access to certain clinical systems data while on premise at a healthcare facility, but not while off-site.
  • Wi-Fi security. There may be times when an organization wants to make its data access rules (including read/write vs. read-only privileges) contingent upon whether a user’s Wi-Fi connection is public/non-secure or private/secure.
  • File hashing. File hashes provide an exceptionally reliable means of ensuring that users only download, open, and work with legitimate content — thereby protecting an organization from a wide range of threats, including ransomware and spearphishing attacks.

To implement these kinds of rich security controls, an organization needs an access management system that can automatically respond in real time to session context and execute hash-based identification. Without those controls, defense against various types of identity and content spoofing will be severely limited.
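The file-hashing control in particular is easy to demonstrate. The sketch below, using Python's standard `hashlib`, allows a file to be opened only if its SHA-256 digest appears on a whitelist; the sample content is invented for illustration.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical hash whitelist: digests of content known to be legitimate.
legitimate_content = b"Q3 sales report"
HASH_WHITELIST = {sha256_of(legitimate_content)}

def may_open(data: bytes) -> bool:
    """Allow a file only if its hash matches a whitelisted digest."""
    return sha256_of(data) in HASH_WHITELIST

print(may_open(legitimate_content))             # True
print(may_open(b"Q3 sales report (tampered)"))  # False
```

Even a one-byte change to the content produces a completely different digest, which is why hash matching is such a reliable guard against tampered or spoofed files.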

Step 5: Make sure your security process is adaptable  

Ask: “Do we have a consistent process for adding new applications (including cloud/SaaS) to our whitelist as demanded by the business — and applying the appropriate policies to them?”

An organization’s business isn’t static. In fact, most companies are adding new cloud/SaaS services at a faster pace than ever. Many of these new services are being activated directly by lines of business, without much involvement from IT. At one time, this was referred to as “shadow IT.” But it’s not just a shadow anymore. It’s central to how organizations leverage software and analytic innovation in the cloud.

If an organization can’t quickly secure these new applications and services, several unacceptable outcomes can result. People may be unable to use new resources in a timely manner because they’re blocked by an organization’s whitelisting system. Or new resources may get whitelisted too hastily — without being properly secured by policies such as geo-fencing and Wi-Fi restrictions. Worse yet, people may just come up with workarounds to avoid an organization’s security mechanisms altogether. None of these outcomes are acceptable.

To avoid these outcomes, an organization needs a fast, reliable, and consistent process for adding new cloud resources (as well as new conventionally developed applications) to its whitelisting repository/automation engine. Without such a process, an organization’s security won’t be able to keep up with its business — which means an organization will either compromise the former or impede the latter.
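One way to picture such a process is an onboarding gate that refuses to whitelist a new app until the required policies have been attached. The policy names and app names below are hypothetical examples, not a real product API.

```python
# Hypothetical onboarding gate: a new cloud/SaaS app is only added to
# the whitelist once every required policy has been attached to it.
REQUIRED_POLICIES = {"geo-fencing", "wifi-restriction"}
WHITELIST = {}

def onboard_app(name: str, attached_policies: set) -> str:
    """Whitelist an app only if it carries every required policy."""
    missing = REQUIRED_POLICIES - set(attached_policies)
    if missing:
        return f"rejected: missing {sorted(missing)}"
    WHITELIST[name] = set(attached_policies)
    return "whitelisted"

print(onboard_app("new-saas-crm", {"geo-fencing", "wifi-restriction"}))
# whitelisted
print(onboard_app("shadow-analytics", {"geo-fencing"}))
# rejected: missing ['wifi-restriction']
```

The gate makes the fast path and the safe path the same path: the business gets its new service quickly, but never without the controls attached.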

Step 6: Empower self-servicing

Ask: “Have we met the needs of the business for consumerization/self-service and LOB delegation?”

The millennial workforce increasingly expects IT to provide consumerized self-service similar to what they experience in their personal use of technology. Self-service is a win-win for IT and the business. The business wins because self-service takes delay out of everyday requests for digital services. IT wins because it frees staff with limited time from a variety of routine tasks. Self-service can also include the delegation of certain administrative tasks to line-of-business managers — such as authorizing access privileges or adding software licenses.

The best way to provide self-service and delegation to the business is by extending an organization’s security whitelist automation engine to non-IT users with the appropriate policy-based controls. This approach allows an organization to ensure that no one outside of its cybersecurity team can violate its policies — even as an organization empowers them to quickly perform routine tasks without IT’s intervention.
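A small sketch shows how delegation can be bounded by policy so that a line-of-business manager can act without IT, yet cannot exceed what the security team has allowed. The roles and resources are hypothetical.

```python
# Hypothetical delegation policy: manager role -> resources that role
# may grant to its own team, as defined by the security team.
DELEGATION_POLICY = {
    "sales-manager": {"crm"},
}

def delegated_grant(manager_role: str, resource: str) -> bool:
    """Allow a non-IT manager to grant access only within policy bounds."""
    return resource in DELEGATION_POLICY.get(manager_role, set())

print(delegated_grant("sales-manager", "crm"))        # True
print(delegated_grant("sales-manager", "erp-admin"))  # False
```

The manager handles the routine request instantly, while anything outside the delegated scope still routes through the security team.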

Step 7: Prepare for an audit

Ask: “Are we ready to handle an audit – really?”

Even if an organization has “checked off” all six items above, none of it matters if the organization cannot credibly prove its diligence to an auditor.

That’s why an organization needs a unified, rules-based access whitelisting automation engine that’s fully self-documenting. Only a centralized permissions control “brain” can secure an organization’s environment and enable an organization to quickly and easily provide auditors with credible evidence that it has exercised full diligence.

By leveraging a single, robust access provisioning mechanism across all of its digital resources — from its most complex core business applications to its most recently adopted cloud service — an organization can make itself vastly more secure while enhancing productivity and without unnecessarily adding to daily workloads.

Bio: Bob Janssen is the CTO, founder, and SVP of Innovation of RES. He has been responsible for product vision, strategy and development at RES since founding the company in 1999 and is a prominent RES spokesperson at industry events. He was instrumental in the creation of the flagship products, RES Workspace Manager (now RES ONE Workspace) and RES Automation Manager (now RES ONE Automation), released in 1999 and 2005, respectively. During his tenure, RES has sold millions of licenses worldwide. Mr. Janssen holds several patents for the solutions he has developed at RES, and has worked with the RES R&D team on the filing of numerous others.

Looking at Intent-based Security and Rethinking Application Security with Twistlock CEO Ben Bernstein (Podcast)


Preston and I interviewed Twistlock CEO Ben Bernstein about his company’s approach to container-based security from a new perspective known as intent-based security, which also has us rethinking application security. Ben gives us an overview of intent-based security and a detailed explanation of why a new perspective is important to application security.

  1. How Ben’s concept of intent-based security is evolving not only the way organizations build applications as DevOps adoption (and, with it, container adoption) continues to rise, but also the approach to application security, addressing fundamental questions of application intent
  2. Why it is so difficult for IT, security and dev teams to look at an app and deduce intent
  3. Why attacks on the application layer are harder to detect, and more difficult to contain, than attacks on the network layer
  4. How to effectively add security to a container-based implementation of DevOps

Podcast details: Length – 20:55 minutes. MP3 format. G rating for all audiences.

Get your own copy of the ebook mentioned in the podcast, “How to Securely Configure a Linux Host to Run Containers”.

As discussed in the podcast, don’t assume anything about security for your container hosts or your containers. Container hosts must be thoughtfully secured, because if someone compromises your host, they own your containers. Securing applications and their containers requires more than cursory security tests. You must build your applications with security in mind, and you must also securely build your containers for those applications.