Enable DLP for outgoing emails in Cisco IronPort

Data Loss Prevention (DLP) protects an organization's sensitive and proprietary information by detecting exfiltration attempts before the data leaves the network and by continuously monitoring transmissions. Most organizational data leaks happen when end users unintentionally email sensitive data out of the network, which leads to data leak incidents.
There are many ways to achieve this; in this article we will look at how to prevent data loss using the options available in the Cisco IronPort email gateway solution.

Basically, any DLP solution involves two stages:

Data Match: The DLP engine scans the email body, headers and attachments for sensitive content, based on the DLP policy rules.

Action: Once an email is identified as sensitive, the matching DLP policy determines the action: drop, quarantine, or deliver with a disclaimer, optionally notifying an admin, manager or recipient based on the policy and document classification.

Below are the steps to enable DLP on Cisco IronPort:

Log in to Cisco IronPort – select Security Services – click on Data Loss Prevention.

By default this option will be enabled, but we still need to create DLP policies and action types based on our requirements.

It is better to enable content logging, which will appear in message tracking and helps in troubleshooting.

In this example we will run through the DLP wizard, which includes a few popular, commonly used policies. Adding custom policies is also very much possible in Cisco IronPort.

An example is enabling matched content logging when DLP is enabled. This helps admins debug and find the reason why an email was blocked.

There are more common use cases; in our example we choose PCI-DSS, which covers highly sensitive data and should be enabled especially for finance teams.

Here we have an option to enable DLP reports.

Once done, the outgoing mail policies will be configured with the PCI-DSS policy we created.

Within this policy we can edit and choose the built-in DLP dictionaries based on our requirements.

There is an option to add custom dictionaries as well.

In the mail policies there is an option to apply the policy only to selected senders or recipients.

Options to add attachment types are present.

The severity settings can also be altered.

The severity scale can be adjusted per policy based on our requirements.

A custom classifier can be added.

In the classifier we have an option to choose templates based on dictionaries and entities.

Once done, DLP will be working for outgoing emails based on the configured policies and actions.

Important Notes:

    1. Before implementing DLP in any environment, a lot of multi-phase study is required, working closely with the security team and implementing purely based on the document classification.
    2. Understand how sensitive data is currently handled by all teams and identify the current risks. After this analysis, draw up the required action plan for creating policies and actions.
    3. End-user awareness sessions are very important when dealing with DLP. Finance teams can be advised to use more secure channels, such as enterprise file share or DRM solutions, when dealing with sensitive documents.
    4. Any DLP policy we create should audit and notify the manager, which creates awareness among employees and makes tracking easier.

Thanks & Regards
Sathish Veerapandian

Microsoft Teams – Consult before transferring a call & HoloLens Remote Assist

Calling in Teams is powered by Phone System (formerly known as Cloud PBX), the same service in Office 365 that enables PSTN calling capabilities in Skype for Business Online.

The Phone System feature set for Skype for Business is different from the Phone System feature set for Teams. Also, with Direct Routing we can use our existing PSTN telephony system through an SBC. To connect the on-premises SBC to Microsoft Teams, a SIP proxy is used to connect to sip.pstnhub.microsoft.com.
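
As a rough sketch, pairing an on-premises SBC with Direct Routing is done from the Skype for Business Online PowerShell module; the FQDN and port below are example values, not settings from this article:

# Pair an SBC with Direct Routing (run in a Skype for Business Online PowerShell session).
# sbc.contoso.com and port 5067 are placeholders; use your own SBC's FQDN and signaling port.
New-CsOnlinePSTNGateway -Fqdn sbc.contoso.com -SipSignallingPort 5067 -MaxConcurrentSessions 100 -Enabled $true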

Microsoft Teams has a new feature: consult before transfer.

Using this option, we can help callers who reached our extension by mistake to get to the right person.

This feature lets you quickly check in with another person via chat or audio call before transferring a call to them.

Anyone with an Enterprise Voice license can do this, not just delegates! To try it, when you’re in a call, click More options (…) > Consult then transfer.

Call someone on a HoloLens –

Microsoft introduced the remote assist option for HoloLens users via Microsoft Teams.

Using this option we can collaborate remotely with people in our Microsoft Teams colleagues list. In a remote assist session we can show them what we see, place arrows, draw lines and share images, and they can make mixed-reality annotations.

Prerequisites:

  1. This works from the Teams desktop app on a Windows 10 PC.
  2. The Remote Assist app needs to be installed on the HoloLens.

Email Security – Enable Sandboxing (ATP) on Cisco IronPort

Cisco Advanced Malware Protection (AMP) uses Cisco's threat intelligence – an extensive knowledge base of the latest threats and security trends, analytics and behavioral indicators – which helps us defend against the latest spear phishing and malware attacks.

This basically falls under the advanced threat protection category, which provides an additional layer of security. ATP solutions have retrospective detection alerts, capable of tracking malware that made it through the initial defenses.

AMP is the recent name given to this kind of advanced threat detection by most security vendors, and it typically includes the following:

  1. A separate, isolated private environment, with implementations for multiple attack vectors/entry points (firewall, network, endpoint, email).
  2. Ransomware/malware threat prevention.
  3. Retrospective alerting and remediation techniques.

Usually AMP works in the following fashion for any email security system:

Preventive measure – strengthens the defense mechanism by staying up to date on the latest malware attacks and defenses through a real-time threat intelligence service.
IronPort uses the Talos engine – https://www.talosintelligence.com/
Using this technique, known malicious content is blocked.

Threat analysis of emails in transit – during this process the file is detonated on a simulated end-user PC (Windows/Mac) in an isolated network to detect malware, observe the file's behavior and assign a threat level if anything is detected. If sandboxing is not enabled locally on premises, the gateway captures a fingerprint of each file that hits it and sends the fingerprint to the AMP cloud-based intelligence network. In most gateways we have an option to select which file types need to be analyzed by AMP.

Tracking after delivery – in this step AMP uses continuous analysis, which helps identify malicious files that begin their attack only after a certain period of time. With this, AMP is able to find the infection source, alert the admin and provide visibility down to the infected file.

In this article we will see how to enable AMP in Cisco IronPort.

Log in to the appliance – navigate to Security Services – Advanced Malware Protection – select File Reputation and Analysis.

If it is enabled, the status is shown on this page. To further fine-tune the settings, click on Edit Global Settings.

Click on Enable File Reputation.

This is used to protect against zero-day and targeted file-based threats.

The following actions are performed after a file's reputation is evaluated:
• If the file is known to the file reputation service and is determined to be clean, the file is released to the end user.
• If the file reputation service returns a verdict of malicious, then the appliance applies the action that we have specified for such files.

Next we have Enable File Analysis.

This needs to be enabled. Options are available for almost all attachment types.

File analysis works in coordination with file reputation filtering. When this option is enabled, email attachments are sent for file analysis. Here we have the option to choose the file types on which to perform the analysis. Be very selective in this section: since analysis is performed on these files, mail delivery to the end user will take a few extra minutes compared to a user who does not have AMP enabled for their account.

If the file is sent for analysis to sandboxing (cloud or on-premises, based on the setup):
• If the selected file type is sent to the cloud for analysis, files are sent over HTTPS. The appliance also generates an identifier for each file using a Secure Hash Algorithm (SHA-256).
• Analysis normally takes minutes, but may take longer depending on file size and type.
• Results for files analyzed using an on-premises Cisco AMP Threat Grid appliance are cached locally.

Advanced settings for file reputation – here we need to select our sandboxing environment based on our configuration. If we are using cloud AMP, there are four regions to select from based on our requirements.

There is an option to register the appliance with AMP for Endpoints. Make sure you have a user account in the AMP for Endpoints console with admin access rights. For more details on how to create an AMP for Endpoints console user account, contact Cisco TAC.

If we have a local on-premises AMP setup, we need to select the private reputation cloud option and add the required details.

We have the same cloud or on-premises option for file analysis.

If specifying the Cisco cloud server, choose the server that is physically nearest to your appliance. Newly available servers will be added to this list periodically through the standard update processes.

If we choose our own private cloud, we need to use the self-signed certificate or upload a certificate. This is required for encrypted communication between this appliance and the private cloud appliance, and it must be the same certificate used by the private cloud server. I prefer to have one SHA-256, 2048-bit certificate generated from an internal CA and apply it on both the private cloud and the appliance for this connection alone.

This setting is optional; we can leave it as it is, or configure the cache expiry period for file reputation disposition values.

Once this is enabled, the file types selected for AMP will be passed to it after the antivirus engine.

The files blocked by AMP can be seen in the incoming mail dashboard.

Important Notes:

  1. An AMP subscription is required to enable this functionality.
  2. Advanced Malware Protection services require network communication to the cloud servers on port 443 (for both file reputation and file analysis). If there is no communication, the file types enabled for AMP will be sent to the quarantine folder even if they are clean, and an error message will appear in the incoming email header. A quick connectivity check is sketched after this list.
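
As a quick sketch, connectivity can be verified from a Windows host in the same network segment. The FQDN below is an assumption based on commonly documented AMP endpoints; confirm the exact servers and ports in your appliance's Advanced Malware Protection settings.

# Check outbound reachability to an assumed AMP file reputation endpoint on port 443.
Test-NetConnection -ComputerName cloud-sa.amp.sourcefire.com -Port 443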

Thanks & Regards
Sathish Veerapandian

Error – loading Microsoft Teams: modern authentication failed, status code caa20004

After enabling Microsoft Teams in a federated setup with ADFS, we might get this error when on-premises users try to log in to Microsoft Teams for the first time.

In the client logs at the location below, we can see messages like the following:

C:\Users\username\AppData\Roaming\Microsoft\Teams

Wed May 30 2018 06:51:54 GMT+0400 (Arabian Standard Time) <7092> — warning — SSO: ssoerr – (status) Unable to get errCode. Err:Error: ADAL error: 0xCAA10001SSO: ssoerr – (status) Unable to get errorDesc. Err:Error: ADAL error: 0xCAA10001

Wed May 30 2018 06:51:54 GMT+0400 (Arabian Standard Time) <7092> — event — Microsoft_ADAL_api_id: 13, Microsoft_ADAL_correlationId: 2c46e41d-ef75-49ed-b277-cfd61427b273, Microsoft_ADAL_response_rtime: 2, Microsoft_ADAL_api_error_code: caa10001,
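
To quickly find these ADAL errors, the client log can be searched with something like the following (a sketch; the log file name can vary between Teams client versions):

# Search the Teams desktop client log for ADAL/SSO errors.
Select-String -Path "$env:APPDATA\Microsoft\Teams\logs.txt" -Pattern 'ADAL error|ssoerr'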

There is also a Get logs option that can be opened from the Teams icon when this issue occurs.

When the issue occurs, the retrieved logs contain an error message about being unable to get the ADAL access token.

For a successful login, by contrast, the logs show success after the access token is obtained.

There is also an option to download MS Teams diagnostics logs by using the key combination below:

Ctrl + Shift + Alt + 1

These diagnostics logs contain a lot of information such as client version, computer name, memory and user ID. Since understanding the whole log is quite difficult, it is best to search only for the information relevant to the issue we are currently facing.

A successful access-token acquisition appears in these logs in a similar way.

Azure AD dependent apps like Microsoft Teams have an optimized path for the first-time login process: they log in using the WS-Trust Kerberos authentication endpoints of ADFS. If this first attempt is not successful, the client falls back to an interactive login session presented as a web browser dialog.

However, the new Office and ADAL clients first try only the WS-Trust 1.3 version of the Windows integrated authentication endpoint, which is not enabled by default.

Solution:

Enable WS-Trust 1.3 for desktop client SSO on the on-premises ADFS server that is federated with the Azure AD tenant by running the command below.

Enable-AdfsEndpoint -TargetAddressPath "/adfs/services/trust/13/windowstransport"

We also want to ensure that we have both Forms and Windows Authentication (WIA) enabled in our global authentication policies.
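
As a quick follow-up check on the ADFS server, a sketch like the one below confirms the endpoint is now enabled (a service restart is needed for endpoint changes to take effect):

# Verify the WS-Trust 1.3 Windows transport endpoint is enabled.
Get-AdfsEndpoint -AddressPath "/adfs/services/trust/13/windowstransport"
# Restart the ADFS service so the endpoint change takes effect.
Restart-Service adfssrv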

Storage Explorer in Azure portal and its options

The Storage Explorer desktop tool is now available in the storage accounts section of the Azure portal.

From here we have options to create and manage blob containers, file shares and queues.

New blob containers can be created, deleted and managed.

We can also upload and delete blobs.

We can drill down further and manage properties.

These are the options available in the properties.

In the same way, file shares can be created, deleted and managed.

We also have options to upload files, download them, and connect to the share from a VM.

Storage queues can also be created and managed.

There are options to add a message, dequeue a message and clear the queue.

Below is a short summary of Azure storage account blobs, file shares and queues.

What is Azure Blob Storage?

Azure Blob storage is Microsoft's object storage solution.
This storage type is optimized for storing large amounts of unstructured data, such as text or binary data.
Items stored in blob storage can be accessed from anywhere in the world via HTTP/HTTPS. Operations can be invoked through Azure tooling (CLI, PowerShell, etc.), and client libraries are available for multiple languages.

Once created, a storage account has a service endpoint and a connection string that our applications and APIs can use to access the data in the storage account.

There are three types of blobs:

Block Blobs – used to store text and binary data, with support for up to about 4.7 TB per blob. Data is stored in blocks, which can be managed individually.

Append Blobs – similar to block blobs, except they are optimized for append operations. They are best suited for recurring operations such as logging data from virtual machines.

Page Blobs – data is stored in pages and accessed randomly, and a page blob can be up to 8 TB in size.

So blobs are stored in the following hierarchy:

Storage Account – Containers – Blobs

A storage account can hold multiple containers, and each container in turn can hold an unlimited number of blobs.
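
As a small sketch of this hierarchy using the Az PowerShell module (the connection string, container and file names are placeholders):

# Build a storage context from the account's connection string (placeholder value).
$ctx = New-AzStorageContext -ConnectionString '<storage-account-connection-string>'
# Create a container and upload a local file into it as a block blob.
New-AzStorageContainer -Name 'documents' -Context $ctx
Set-AzStorageBlobContent -File 'C:\Temp\report.txt' -Container 'documents' -Blob 'report.txt' -Context $ctx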

What is Azure File Storage?
This is an Azure service through which we can create a file share in the Azure cloud using the standard Server Message Block (SMB) protocol. This option is really useful for migrating local file shares to Azure quickly and at very low cost.

Once the file share is created, we get a ready-made connection script.

We can use it to connect from either Windows or Linux.

The connection script also contains the username and password.

Since this is SMB, it uses port 445, so make sure port 445 is open in your local network firewall. We will not be able to connect if port 445 is not allowed out of the local network.
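
A minimal sketch of checking the port and then mapping the share from Windows (the account, share name and key are placeholders):

# Confirm outbound port 445 is reachable before attempting to mount.
Test-NetConnection -ComputerName mystorageacct.file.core.windows.net -Port 445
# Map the Azure file share as drive Z: using the storage account name and key.
net use Z: \\mystorageacct.file.core.windows.net\myshare /user:AZURE\mystorageacct <storage-account-key>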

What is Azure Storage Queue Service?

This is a service offered by Azure in which we can store large volumes of messages that can be accessed from anywhere in the world via HTTP/HTTPS. A single message can be up to 64 KB in size. Using this we get persistent messaging within and between services, and a single queue can hold a practically unlimited number of messages.

Once created, we get an endpoint against which REST-based GET/PUT/PEEK operations can be issued.
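
A short sketch of adding a message to a queue with the Az PowerShell module (names and connection string are placeholders; the CloudQueueMessage type namespace can differ between Az.Storage versions):

# Build a context and create a queue (placeholder names).
$ctx = New-AzStorageContext -ConnectionString '<storage-account-connection-string>'
$queue = New-AzStorageQueue -Name 'orders' -Context $ctx
# Enqueue a message through the underlying CloudQueue object.
$message = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new('hello from the queue')
$queue.CloudQueue.AddMessageAsync($message) | Out-Null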


Enable Azure DDoS Protection and its features

In Azure we can enable DDoS protection in a few clicks for applications running and deployed in Azure virtual networks.

Using this, we can protect the resources in a virtual network and their published endpoints, including public IP addresses. When integrated with the Application Gateway web application firewall, DDoS Protection Standard can provide full layer 3 to layer 7 protection.

There are two service tiers:

Basic –

Basic protection is enabled by default. It provides protection against common network-layer attacks through always-on traffic monitoring and real-time mitigation.

Standard –

Standard protection is a paid premium service. It provides dedicated monitoring and machine learning, and configures DDoS protection tuned to the virtual network. When enabled, it learns the application's traffic patterns and uses them to detect malicious traffic intelligently. We can switch between these options for our virtual networks in a few clicks.

Then we can click on the Standard plan.

This also provides attack telemetry views through Azure Monitor, enabling alerting when the application is under attack. Integrated layer 7 application protection can be provided by the Application Gateway WAF.

The Standard tier is integrated with virtual networks and protects Azure application service endpoints from DDoS attacks. It also has the alerting and telemetry features that are not present in the Basic DDoS protection plan, which comes free of charge.

To use the Standard tier, we first need to create a DDoS protection plan.

Navigate to the Azure portal – click on Create DDoS protection plan.

Type a name – choose the subscription – select the resource group and choose the location.
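
The equivalent with the Az PowerShell module, using hypothetical names, would look roughly like this:

# Create the DDoS protection plan (resource group, name and location are placeholders).
$plan = New-AzDdosProtectionPlan -ResourceGroupName 'rg-network' -Name 'ddos-plan-01' -Location 'westeurope'
# Enable the plan on an existing virtual network.
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-network' -Name 'vnet-prod'
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork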

Once done, the deployment completes successfully.

An automation template option is also available during this deployment.

After it is deployed, when we open the DDoS resource we can see the following options.

Activity Log – 

This is more like an audit log: it records operations that modify resources in the subscription.
It also shows the status of each operation and other properties. Note that this log does not record read (GET) operations on resources.

There is an option to filter by resource, resource type and operation.

We also have an option to filter by category, severity and the initiating user.

Access Control (IAM) –

We can view who has access to the resource, grant new access, and remove existing access.

Tags –

This approach is helpful when we need to organize our resources for billing or management. Tags can be applied to resource groups or directly to resources.
Querying by a tag name and value retrieves all the resources in our subscription with that tag, which is usually helpful for billing and tracking purposes.

Tags support only resources deployed through Resource Manager; resources deployed through the classic model are not supported.

By default a resource group has no tags assigned. We can assign them by running a command such as the sketch below.
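
The original screenshot with the command is not preserved; a typical way to do this with the Az module (hypothetical names and tag values) is:

# Assign tags to a resource group (names and values are placeholders).
Set-AzResourceGroup -Name 'rg-network' -Tag @{ Dept = 'IT'; Environment = 'Production' }
# Retrieve all resource groups carrying a given tag.
Get-AzResourceGroup -Tag @{ Dept = 'IT' }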

Locks – 

Management locks help us prevent accidental deletion or modification of our Azure resources. We can manage these locks from within the Azure portal.

As administrators, we might need to lock a subscription, resource group or resource to prevent other users in the organization from accidentally deleting or modifying critical resources.

There are two lock levels:

Delete (CanNotDelete) –
Authorized users can read and modify a resource, but they cannot delete it.

ReadOnly –
Users can read a resource, but they cannot modify or delete it.
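
A small sketch with the Az module (the lock and resource group names are made up):

# Apply a delete lock to a resource group so its resources cannot be deleted.
New-AzResourceLock -LockName 'do-not-delete' -LockLevel CanNotDelete -ResourceGroupName 'rg-network'
# List the locks on the resource group to confirm.
Get-AzResourceLock -ResourceGroupName 'rg-network'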

Metrics – 

Allows us to monitor the health, performance, availability and usage of our services.

Thanks & Regards
Sathish Veerapandian

Configure Enterprise Vault Server Driven PST migration

This article outlines the steps to perform a bulk import of PST files into a large number of mailbox archives in Enterprise Vault.

There are a few methods to perform a server-driven migration in Enterprise Vault; we will cover one option using the PST task controller services.

Prerequisites:

A CSV file with the information below needs to be prepared to feed the data into Enterprise Vault's Personal Store Management; a sample layout is sketched after the field list.

Where –

UNCPath – path to the PST files. It is better to keep them on the Enterprise Vault server, which speeds up the migration.
Mailbox – display name of the mailbox associated with this EV archive.
Archive – display name of this archive.
Archive Type – Exchange Mailbox, since it is associated with an Exchange mailbox.
Retention Category – choose based on requirements.
Priority – choose based on requirements.
Language – choose based on requirements.
Directory server – choose the corresponding directory server.
Site Name – choose the corresponding site name.
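
Since the original screenshots of the file are not preserved, here is a hypothetical example assembled from the fields above (headers and values are illustrative only; check the Personal Store Management import dialog for the exact headers it expects):

UNCPath,Mailbox,Archive,Archive Type,Retention Category,Priority,Language,Directory Server,Site Name
\\EVSERVER\PSTImport\jsmith.pst,John Smith,John Smith,Exchange Mailbox,Default Retention,Medium,English,evdir.contoso.local,EVSite1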

Once the CSV file is ready, we need to import the data via Personal Store Management, choosing the multiple option and feeding it the CSV file.

Once imported, we can see a summary of the successfully imported entries.

If it is unable to find the associated archive for any entry in the CSV file, it gives an error message just for those entries, and we have an option to export them as a CSV file.

After the import succeeds, we can see the list of successfully imported files.

We have now provided EV with the data required to migrate into the associated archives. Next we need to create the PST collector task, the PST migrator task and the PST migration policy.

After this, we need to create the PST holding folder by right-clicking the EV site properties and specifying the location. This holding folder is a temporary location EV uses to copy the actual PST files from the UNC path and perform the import.

This is done because if EV tries to import a PST directly and the import fails, that PST can no longer be used. After the migration is complete, EV automatically deletes these files based on the PST migration policy we have configured.

After this, configure the PST migration policy –

We can ignore the client-driven settings here, because we are performing a server-driven migration by providing the PST files via the CSV file.

There is an option to set the post-migration handling of PST files. It is better not to use this option until the complete migration task is over and we get confirmation from end users.

There is a very good option to send an email notification after migration.

After this we need to create the PST collector task.

This setting is very important: it specifies the maximum number of PST files to be collected in the holding area. We can set this value based on our requirements.

We should schedule the collector task to run, preferably after office hours.

Configure the Migrator Task

Once this is done, we need to configure the PST migrator task.

We need to configure the temporary file location for the PST files to start the migration.

We also have an option for the number of PSTs to migrate concurrently, which we can increase based on our requirements. After the CSV is imported, we can run the PST collector and migrator tasks, which will start importing the PSTs into the associated EV archives.

There is also a file dashboard that helps us check the current migration status at any time.

Very important – select the override password option for password-protected PST files in Personal Store Management. This will also migrate the password-protected PST files. This option looks amazing.

untitled17

Tips:

  1. Make sure the EV service account is used to run the collector and migrator tasks.
  2. Make sure the EV service account has full access to the PST holding, collecting and migrating shared drives. Without this, the import, collection and migration will fail.
  3. It is better not to perform any node failovers while a large import operation is in progress.
  4. PST collector and PST migrator logs are generated whenever these tasks run, located in the EV provisioning task location. They give more information when there are issues or roadblocks in the migration.
  5. If any of the provided PST files are password protected, they will not be migrated unless we select the override option for password-protected files in Personal Store Management.
  6. Make sure there is sufficient free disk space in the PST collector and PST migrator locations.

Thanks & Regards
Sathish Veerapandian
