Friday, August 16, 2013

Rollout Strategy for Data Loss Prevention

Implementing a Data Loss Prevention (DLP) system is important for securing an organization's sensitive information, yet it has always proved tricky when measured against the outcomes that sales representatives boast about. A successful DLP implementation does achieve what it claims, but it takes considerable time and effort from the organization before the initial stages deliver what is expected. Since an end-to-end DLP solution normally carries a large price tag, companies find it hard to justify the time it takes before the investment starts to look effective. 

It is good practice to follow the steps below as part of this lengthy exercise; I have found this sequence successful in a number of large organizations.


(Note: Some of the terminology in this article is taken from Symantec DLP solution components.)

A. Enable or Customize Policy Templates 


B. Discover 
    - Identify scan targets 

    - Run scans to find sensitive data on network and endpoint storage 


C. Monitor 
    - Inspect data being sent 
    - Monitor network & endpoint events 

D. Protect 
    - Block, remove or encrypt 
    - Quarantine or copy files 
    - Notify employee & manager 

E. Remediate and report on risk reduction
The whole process is quite time-consuming and requires constant, gradual improvement. I recommend adopting a phase-wise approach to achieving all of these data protection goals. 

Phase I (Policies and Templates) 

 1. Data Classification 

 2. Mapping of classified data to the applicable Symantec DLP policies

 3. Introduction of Data into DLP System


- Sample data population for Exact Data Matching (EDM) based policies (a sketch of preparing such a sample appears at the end of this phase's list)

- Sample data population for Described Content Matching (DCM) based policies

- Sample data population for Indexed Document Matching (IDM) based policies

- Population of positive and negative samples for Vector Machine Learning (VML) based policies

 4. Start monitoring SMTP and HTTP traffic

 5. Identify stakeholders who will receive notifications and those subject to access restrictions 
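
As an illustration of item 3 above, here is a minimal sketch of how a sample extract might be prepared for EDM indexing. It assumes a pipe-delimited text file with a header row is acceptable to your DLP version (confirm the exact format in the product documentation); the source file "customers_export.csv" and the column names are hypothetical placeholders.

    # Hedged sketch: build a small pipe-delimited sample extract for EDM indexing.
    # "customers_export.csv" and the column names are placeholders, not real data.
    import csv

    COLUMNS = ["first_name", "last_name", "account_number", "national_id"]

    with open("customers_export.csv", newline="") as src, \
            open("edm_sample.txt", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst, delimiter="|")
        writer.writerow(COLUMNS)              # header row naming the fields
        for i, row in enumerate(reader):
            if i >= 1000:                     # keep the pilot sample small
                break
            writer.writerow([row[c].strip() for c in COLUMNS])

A small, representative sample like this is usually enough for the pilot phases; the full production data set can be indexed later once the policies have been tuned.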


Phase II (Pilot Audience) 
Notification will be for Administrators only

Network 
1. EDM Policy Implementation on Pilot Users 

2. DCM based Policy Implementation on Pilot Users

3. IDM based Policy Implementation on Pilot Users

Databases
1. EDM Policy Implementation on Pilot data repositories 

Endpoints
1. EDM Policy Implementation on Pilot users 

2. DCM based Policy Implementation on Pilot Users

3. IDM based Policy Implementation on Pilot Users

4. Policy review and fine tuning

Phase III (Prevent Mode Enabled for Pilot Users) 
Notifications will be for Administrators and pilot users only 

1. Extension of notification scope to pilot end users

2. Fine tune applied policies based on end user feedback

3. Identification of policies whose action mode must be to block access

4. Blocking access to resources where required; the action includes notification to all stakeholders.

Phase IV (Review and Re-adjustments) 

1. Review information and tune policies based on feedback from end users and data from network monitors

2. Involve the concerned departments to identify their specific requirements

3. Re-align the scope of network monitors for monitoring HTTP and SMTP traffic

4. Add or remove protocols to be monitored through the network monitor

5. Identify production data repositories to protect

6. Implement policies on production data repositories

Phase V (Monitoring Network and Storage) 
Notifications will be for Administrators and end users 

1. Inform and educate end users

2. Policy rollout to QA Department users

3. Policy rollout to non-critical departments (finance, marketing, corporate communication, etc.)

4. Policy rollout to critical departments (call centers, administration, sales, etc.)

5. Feedback collection and fine-tuning

Phase VI (Monitoring Endpoints) 
Notifications will be for Administrators and end users 

1. Policy rollout to QA Department users

2. Policy rollout to non-critical departments (finance, marketing, corporate communication, etc.)

3. Policy rollout to critical departments (call centers, administration, sales, etc.)

4. Feedback collection and policies fine-tuning

Phase VII (Protecting through Network & Storage) 
Notifications will be for Administrators and end users 

1. Inform and educate end users

2. Policy modifications for QA Department users

3. Policy modifications for non-critical departments (finance, marketing, corporate communication, etc.)

4. Policy modifications for critical departments (call centers, administration, sales, etc.)

5. Feedback collection and fine-tuning 


Phase VIII (Protecting on Endpoints) 
Notifications will be for Administrators and end users 

1. Policy rollout to QA Department users

2. Policy rollout to non-critical departments (finance, marketing, corporate communication, etc.)

3. Policy rollout to critical departments (call centers, administration, sales, etc.)

4. Feedback collection and policies fine-tuning

Friday, May 17, 2013

NTLM Authentication vs. DC Interface (A Comparison of Symantec Web Gateway Features)


Introduction
Symantec Web Gateway (SWG) is a state-of-the-art proxy and web filtering solution for corporate local area networks. It can authenticate end users and provide them with a secure web browsing experience in line with the organization's policies and requirements.

SWG can use one of the two authentication mechanisms available in it:

- Domain Controller Interface (DCI) 
- NTLM based Authentication

SWG can only use one of these methods at a time.

Comparison of NTLM authentication and DC Interface Mechanisms
NTLM and the DC Interface provide different kinds of authentication mechanisms and differ in functionality as well.

DC Interface
DCI works by integrating with the domain controllers in an organization. To do so, a small piece of agent software must be installed on a domain controller; this software is what integrates SWG with the corporate domain.

How DCI Works
The SWG connects routinely to the DC to obtain LDAP group information for all known users.
1. The user logs on to a computer.
2. The DC Interface agent on the Domain Controller detects the logon event and sends the user details and IP address to SWG.
3. The user connects to the Internet.
4. SWG matches the connecting IP address to a user using the information received from the DC Interface.
5. SWG obtains LDAP group membership information from the DC.
6. SWG applies the appropriate policy based on the LDAP information.
7. If no matching logged-on domain user is identified, SWG applies the next IP-based policy or the default policy.
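
As a conceptual illustration only (this is not SWG code; the names, addresses and structures are made up), the short Python sketch below shows the kind of IP-to-user lookup that steps 4 and 7 describe: a mapping populated from logon events is consulted when a connection arrives, and a default applies when no user is known.

    # Conceptual illustration of the IP-to-user matching described above.
    # NOT Symantec code; data and names are hypothetical.
    logon_events = {
        "10.0.5.23": "CORP\\alice",   # reported by the DCI agent on the DC
        "10.0.5.41": "CORP\\bob",
    }

    def policy_for(source_ip: str) -> str:
        user = logon_events.get(source_ip)
        if user is None:
            # Step 7: no matching logged-on domain user -> IP-based/default policy
            return "default-policy"
        # Steps 5-6: group membership would be looked up via LDAP here
        return "user-policy for " + user

    print(policy_for("10.0.5.23"))   # user-policy for CORP\alice
    print(policy_for("10.0.9.99"))   # default-policy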

NTLM Authentication
NTLM authentication is configured by providing the corporate domain controller's IP address and credentials in SWG's NTLM authentication configuration tab. It does not require the installation of any additional software on the domain controller.

How NTLM Authentication Works
1. The SWG administrator creates an authentication policy set to Ignore, Authenticate No Enforce, or Enforce.
2. The SWG connects routinely to the DC to obtain LDAP group information for all known users.
3. The user connects to an Internet site via the proxy.
4. The user's browser receives an NTLM challenge from the Web Gateway.
5. The user's browser responds transparently with a hash of the user's credentials.
6. The Web Gateway connects to the Domain Controller (noted in the LDAP settings) to verify the credentials.
7. If verification succeeds, policies are applied according to the LDAP information.
8. If the NTLM process is not working correctly, or the user's LDAP information is not yet known, SWG applies the next IP-based policy or the default policy.
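
To see what the challenge in step 4 looks like on the wire, the hedged Python sketch below sends a plain request through a proxy and prints any Proxy-Authenticate headers it returns. The proxy host and port are placeholders for your environment, and this only illustrates the initial HTTP 407 challenge, not the full hashed-credential exchange.

    # Minimal sketch: observe the authentication challenge issued by an
    # authenticating proxy. Host and port below are placeholders.
    import http.client

    PROXY_HOST = "swg.example.local"   # hypothetical proxy address
    PROXY_PORT = 8080

    conn = http.client.HTTPConnection(PROXY_HOST, PROXY_PORT, timeout=10)
    # Ask the proxy to fetch an external site on our behalf.
    conn.request("GET", "http://www.example.com/",
                 headers={"Host": "www.example.com"})
    resp = conn.getresponse()

    print("Status:", resp.status, resp.reason)     # expect 407 when auth is enforced
    for name, value in resp.getheaders():
        if name.lower() == "proxy-authenticate":
            print("Challenge offered:", value)     # e.g. "NTLM" and/or "Negotiate"
    conn.close()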

Comparison of NTLM and DC Interface Features

NTLM has some advantages over the DC Interface:


DCI: Provides only a user identification service.
NTLM: Provides both identification and authentication services.

DCI: Integration with the domain controller requires installation of agent software on at least one domain controller in the environment.
NTLM: Integration with the domain controller does not require any additional software.

DCI: Policy is mapped on the basis of the IP address initially assigned to a machine, which results in a policy mismatch if the user switches machines.
NTLM: Policy is based on the username and only applies to the designated user.

Tuesday, February 26, 2013

Mobile Web Browsers: The New Men in the Middle?


It was not very long ago that HTC's Dream came true in the form of the first ever smartphone based on the Android operating system. 

What began as a battle to beat iOS later gave us Android as the most popular smartphone OS (thanks to a timely acquisition by Google).

Most popular!  . . . Agreed,

Very Convenient!  . . . Fine,

But should you compromise your security for the sake of convenience?

Definitely Not!

When it comes to accessing the internet through mobile web browsers, one must understand the risks involved in using these browsers to reach websites that contain secure content.  
At the moment, at least two mobile browsers, Nokia's OVI browser and Opera's Mini browser, use their own proxy servers to decipher secure communication transmitted over the HTTPS protocol.
These browsers are pre-configured to send all traffic to their own proxy servers instead of sending it directly to the actual destination.

The secure content is stripped so that it can be examined and modified accordingly. All such companies claim there is no human intervention, access or involvement in the inspection and alteration of the content.

On the other hand, the reality is that all the secret information we transmit or receive through such browsers is visible to one additional entity, the browser software provider, a fact that may be mentioned somewhere in a lengthy terms-and-conditions document but is not a healthy sign for our privacy.

What Is at Stake?

Personal information, including account passwords and PIN numbers, is the most common example and potentially the most dangerous too.

Why Do They Need to Strip the HTTPS Traffic?


There are mainly two reasons: 


  • To make web pages fit the mobile phone's smaller screen by re-organizing them
  • To reduce the workload on a compact browser by doing the rendering on the application provider's proxy servers



How Do They Do That?

All such browsers are pre-configured to send all traffic to a certain set of proxy servers.
These servers receive the request, forward it to the original website and receive the response from that server. Because the proxy terminates the HTTPS session itself, it can decrypt the secured content and adjust it to give the user an acceptable browsing experience with limited use of device resources. In a way the provider is doing something good for the user, but it comes at the cost of an elevated data exposure risk.
The question then is: if the HTTPS traffic is being stripped, why do users not get any security certificate warning?
Because these browsers are configured to trust the certificates issued by their respective proxy servers, users never receive a certificate warning.
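
One practical way to spot such an intermediary is to compare the certificate a site presents on a trusted machine with what the suspect path reports. The hedged Python sketch below prints the certificate seen from the machine running it; the host name is a placeholder, and this is a rough check rather than a complete defense.

    # Minimal sketch: print the certificate a site presents when connected
    # to directly. Compare the issuer with what another path reports.
    import socket
    import ssl

    HOST = "www.example.com"   # placeholder: the site you want to check

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    issuer = dict(item[0] for item in cert["issuer"])
    subject = dict(item[0] for item in cert["subject"])
    print("Subject CN :", subject.get("commonName"))
    print("Issuer     :", issuer.get("organizationName"), "/", issuer.get("commonName"))
    print("Valid until:", cert["notAfter"])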

How to Avoid this Issue?

  • Apparently if a website’s content is opening differently on your mobile device compared to laptop, it is using a man in the middle.
  • Use a full version instead of compact version wherever possible.
  • Never use mobile browsers to access Email and online bank account portals. Otherwise you have an extra hop which if compromised can never be held responsible for any loss, thanks to the privacy policy document containing 1 million words having a big “I AGREE” button which we press eagerly during installation.  
  • Consider using proxy services