Configure Enterprise Vault Server-Driven PST Migration

This article outlines the steps to perform a bulk import of PST files into a large number of mailbox archives in Enterprise Vault.

There are a few methods to perform a server-driven migration in Enterprise Vault; here we will cover one option using the PST Task Controller services.

Prerequisites:

A CSV file with the below information needs to be prepared to feed the data to Enterprise Vault Personal Store Management.


Where –

UNCPath – path of the PST files. It is better to keep them on the Enterprise Vault server, which speeds up the migration.
Mailbox – display name of the mailbox associated with the EV archive.
Archive – display name of the archive.
Archive Type – Exchange Mailbox, since it is associated with an Exchange mailbox.
Retention Category – choose based on requirement.
Priority – choose based on requirement.
Language – choose based on requirement.
Directory Server – choose the corresponding directory server.
Site Name – choose the corresponding site name.
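As a sketch, the CSV can be generated with a short Python script. The header names and sample values below are assumptions for illustration only; match them to the exact headers that your EV version's Personal Store Management import expects.

```python
import csv

# Hypothetical column headers; check them against the headers that the
# Personal Store Management import dialog of your EV version expects.
FIELDS = ["UNCPath", "Mailbox", "Archive", "Archive Type",
          "Retention Category", "Priority", "Language",
          "Directory Server", "Site Name"]

rows = [
    {
        "UNCPath": r"\\EVSERVER\PSTShare\user1.pst",  # keep PSTs on the EV server if possible
        "Mailbox": "John Doe",
        "Archive": "John Doe",
        "Archive Type": "Exchange Mailbox",
        "Retention Category": "Default Retention Category",
        "Priority": "Medium",
        "Language": "Western European",
        "Directory Server": "EVSERVER",
        "Site Name": "EVSite",
    },
]

# Write the CSV that will be fed to Personal Store Management.
with open("pst_import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

In a real migration the `rows` list would be built from an inventory of the PST share, one row per PST file.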

Once the CSV file is ready, we need to import the data via Personal Store Management, by choosing Multiple and feeding in the CSV file.


Once imported, we can see a summary of the successfully imported CSV entries.

If it is unable to find the associated archive for any entries in the CSV file, it will give an error message for those entries only, and we have the option to export them as a CSV file.


After the import is successful we can see the list of successfully imported files with the below information.


Now we have provided EV with the required data to migrate to the associated archives. Next we need to create the PST Collector task, the PST Migrator task and the PST migration policy.

After this we need to create the PST holding folder by right-clicking the EV site properties and specifying the location. The PST holding folder is a temporary location used by EV to copy the actual PST files from the UNC path and perform the import.

This is done because if EV tries to import a PST directly and the import fails, that PST can no longer be used. After the migration is complete, EV automatically deletes these files based on the PST migration policy that we have configured.


After this, configure the PST migration policy –

We need to ignore the client-driven settings here, because we are performing a server-driven migration by providing the PST files via the CSV file.


There is an option to set the post-migration configuration of the PST files. It is better not to use this option until the complete migration task is over and we get confirmation from the end users.


There is a very useful option to send an email notification post migration.


After this we need to create the PST Collector task.


This setting is very important: it specifies the maximum number of PSTs to be collected in the holding area. We can set this value based on our requirement.


We should schedule the collector task, preferably after office hours.


Configure the Migrator Task

Once this is done we need to configure the PST Migrator task.


We need to configure a temporary file location for the PST files to start the migration.


We also have the option to set the number of PSTs to migrate concurrently, which we can increase based on our requirement. After the CSV is imported we can run the PST Collector and Migrator tasks, which will start importing the PSTs into the associated EV archives.

There is also a dashboard that helps us check the current migration status at any time.


Very important – select the override password for password-protected PST files in Personal Store Management. This will migrate the password-protected PST files as well.


Tips :

  1. Make sure the EV service account is used to run the Collector and Migrator tasks.
  2. Make sure the EV service account has full access to the PST holding, collecting and migrating shared drives. If not, the import, collection and migration will fail.
  3. It is better not to perform any node failovers while a large import operation is in progress.
  4. PST Collector and PST Migrator logs are generated whenever these tasks run, located in the EV provisioning task location. They give more information when there are any issues or roadblocks in the migration.
  5. If any of the provided PST files are password protected, they will not be migrated unless we select the override for password-protected files in Personal Store Management.
  6. Make sure you have sufficient free disk space in the PST Collector and PST Migrator locations.

Thanks & Regards
Sathish Veerapandian

Steps to renew the SSL Service Communication certificate in ADFS server

This article explains the types of certificates present on an ADFS server and the steps to renew the SSL service communication certificate on the ADFS server.

Basically there are 3 types of certificates required for ADFS –

  1. Service Communication certificate – This certificate is used for secure communications with web clients (web clients, federation servers, web application proxies and federation server proxies). The service communication certificate is presented to end users when they are redirected to the ADFS page by an application. It is always recommended to use a public SSL certificate for the service communication certificate, because it has to be presented to end users when they are redirected to the ADFS page.
  2. Signing certificate – Signing certificates are used to sign the SAML token. When signed, all the data within the token remains readable in clear text, but when the consumer receives the token it knows that the token has not been tampered with since it left the source; if it finds the token has been tampered with, it will not accept it. Token signing can be done only with the private portion of the certificate, which only the ADFS server holds.
    This certificate is used to sign only the SAML tokens. Token validation is done with the public portion of this certificate, which is available in the ADFS metadata. ADFS comes with a default self-signed signing certificate that has a validity of 1 year, which can be extended; alternatively, we can generate one from an internal CA and assign it.
  3. Token Decryption certificate – This certificate is used when an application sends encrypted tokens to the ADFS server. It does not sign the token, only encrypts it. The application encrypts the token using the public part of the token decryption certificate. Only the ADFS server holds the private part of the key, which it uses to decrypt the token. ADFS comes with a default self-signed token decryption certificate that has a validity of 1 year, which can be extended; alternatively, we can generate one from an internal CA and assign it.

We can see the public certificate in the published ADFS metadata.

Access the metadata URL in a browser and look for the X509Certificate values. There can be several of them; the public certificate is base64 encoded, so the value will normally end with an “=” sign, as in the example below.

 

Once we save the value in .crt format we can see the public certificate that is present in the ADFS metadata URL. Using this, the application encrypts the token and sends it to the ADFS server. The ADFS server in turn can decrypt it using the certificate's private key, which is present only on the ADFS server. If this private key is compromised, anybody can impersonate your ADFS server.
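To illustrate how an X509Certificate value from the metadata can be turned into a .crt file, here is a hedged Python sketch. The embedded metadata snippet and its base64 value are dummy placeholders, not a real certificate; in practice you would fetch the real federation metadata XML from your ADFS server.

```python
import base64
import textwrap
import xml.etree.ElementTree as ET

# Tiny stand-in for the federation metadata document; a real one comes from the
# ADFS metadata URL, and each X509Certificate value is a full base64-encoded
# DER certificate. The value below is a dummy placeholder.
metadata = """<EntityDescriptor xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <ds:KeyInfo><ds:X509Data>
    <ds:X509Certificate>TUlJQ2R1bW15Y2VydGJ5dGVz</ds:X509Certificate>
  </ds:X509Data></ds:KeyInfo>
</EntityDescriptor>"""

DS = "{http://www.w3.org/2000/09/xmldsig#}"
root = ET.fromstring(metadata)

for i, el in enumerate(root.iter(DS + "X509Certificate")):
    b64 = "".join(el.text.split())   # strip any whitespace/newlines
    der = base64.b64decode(b64)      # raw DER bytes of the certificate
    # Re-wrap the base64 as PEM so the file opens as a certificate (.crt).
    pem = ("-----BEGIN CERTIFICATE-----\n"
           + "\n".join(textwrap.wrap(b64, 64))
           + "\n-----END CERTIFICATE-----\n")
    with open(f"adfs_cert_{i}.crt", "w") as f:
        f.write(pem)
```

Opening the resulting .crt file in Windows then shows the certificate details, just as described above.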


We can more or less verify the encryption on our own to get a better understanding of how it works.
When we do a SAML trace in Firefox Developer Edition against a relying party we have with ADFS and check the SAML token, we will see that the SAML response to the integrated service provider is encrypted.

The below steps can be followed to renew the service communication certificate:

  1. Generate a CSR from the ADFS server. This can be done via IIS.
  2. Get the certificate issued from the public CA portal.
  3. Once the certificate is issued, add the new certificate to the certificate store.
  4. Verify the private key on the certificate. Make sure the new certificate has its private key.
  5. Assign permissions to the private key for the ADFS service account. Right-click the certificate, click Manage Private Keys, add the ADFS service account and assign permissions as shown in the below screenshot.


6. From the ADFS console select “Set Service Communication Certificate”.

7. Select the new certificate from the prompted list of certificates.

To renew the SSL certificate for the ADFS claims providers federation metadata URL, you can follow the previous article – https://exchangequery.com/2018/01/25/renew-ssl-certificate-for-adfs-url/

 

Create Cosmos DB, failover options and data replication options from the Azure subscription

This article outlines the steps to create a Cosmos DB account from the Azure subscription.

  1. Log in to the Azure portal – click Azure Cosmos DB – Create Cosmos DB.
  2. Type the document ID – keep in mind that this document ID forms the URL we will use in the connection string in the application.
  3. Select the preferred API according to your requirement.
  4. Choose the Azure subscription and select the resource group.
  5. Choose the primary location where the data needs to be replicated. There is an option to enable geo-redundancy, which can be done later as well.


To enable geo-redundancy –

Click Enable Geo-Redundancy and choose the preferred region.


Replicate data globally in a few clicks –


Failover options –

There are 2 failover options: manual and automatic.

A manual failover can be triggered at any time – we just need to accept the disclaimer and initiate the failover.


Add new regions at any time and replicate your data in a few minutes –


Failover options – Automatic

We need to go and enable automatic failover as below.


There is also an option to change the failover priorities in a few clicks. The good part is that this can be done at any time, and we do not need to change anything in the code.


Consistency levels:

These can be modified at any time. The default consistency level is Session, as below.


Network Security for the Database:

We have an option to allow access to the database only from a few subnets. This adds strong security to the documents. A VAPT can be initiated after setting up this security, which eases the database admin's job with regard to data security considerations.


Endpoint and Keys to integrate with your code:

We need to use the URI and the primary key to integrate with the code. These can be seen by clicking the Keys section on the left side.
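For illustration, the URI and primary key are usually combined into a connection string of the documented form `AccountEndpoint=...;AccountKey=...;`. A minimal Python sketch, with a placeholder account name and key:

```python
# Compose a Cosmos DB connection string from the values shown on the Keys
# blade. The account name and key below are placeholders, not real credentials.
def cosmos_connection_string(account: str, key: str) -> str:
    endpoint = f"https://{account}.documents.azure.com:443/"
    return f"AccountEndpoint={endpoint};AccountKey={key};"

conn = cosmos_connection_string("mycosmosaccount", "PRIMARY_KEY_PLACEHOLDER")
print(conn)
```

The application then passes this endpoint and key (or the combined string) to whichever API client it uses.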


Summary:

Now a Cosmos database is created. Next, create a new collection and then create documents; they are stored as JSON rows. Try to keep most of the documents under one collection, because the pricing model is per collection.

Create collection:

Click on Add collection


Create a new database ID, then a collection ID.

Remember the collections in Cosmos DB are created in the below order.


Now we need to choose the throughput and the storage capacity; charges accrue according to this selection. There is also an option to choose unique keys, which adds more data integrity.


Example of a new document


It is better to define a document ID and collection ID.
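A minimal sketch of what such a document can look like; every Cosmos DB document carries a unique "id" string, and the other field names below are made up for illustration:

```python
import json

# A minimal Cosmos DB document sketch. The "id" field is the unique key within
# the collection; the remaining fields are free-form JSON and purely illustrative.
doc = {
    "id": "order-1001",        # unique within the collection
    "customer": "John Doe",
    "items": [{"sku": "A-1", "qty": 2}],
    "total": 49.90,
}
print(json.dumps(doc, indent=2))
```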


Once the above is done, we can connect to the documents via the preferred available API, and the developer does not need to worry about data schema, indexing or security.

More sample codes in GitHub:

https://github.com/Azure-Samples/azure-cosmos-db-documentdb-nodejs-getting-started

Example below:

Before you can run this sample, you must have the following prerequisites:

◦An active Azure Cosmos DB account.
◦Node.js version v0.10.29 or higher.
◦Git.

1) Clone the repository.

2) Change Directories.
3) Substitute your endpoint and primary key into the sample configuration.

4) Run npm install in a terminal to install the required npm modules.
5) Run node app.js in a terminal to start your Node application.

Thanks & Regards
Sathish Veerapandian
MVP – Office servers & Services.

Microsoft Cosmos DB features, options and summary

This article gives an introduction to Microsoft Cosmos DB, the features available and the options to integrate it with an application.

Introduction:

Cosmos DB is the next generation of Azure DB; it is an enhanced version of Document DB.
Document DB customers, with their data, automatically became Azure Cosmos DB customers.
The transition is seamless, and they now have access to all capabilities offered by Azure Cosmos DB.

Cosmos DB is a planet-scale database. It is a good choice for any serverless application that needs low order-of-millisecond response times and needs to scale rapidly and globally. It is transparent to your application, and the configuration does not need to change.

How It was derived:

Microsoft Cosmos DB isn’t entirely new: it grew out of a Microsoft development initiative called Project Florence that began in 2010. Project Florence was a speculative glimpse into a future where our natural and digital worlds could co-exist in harmony through enhanced communication.


  • It was first commercialized in 2015 with the release of a NoSQL database called Azure DocumentDB
  • Cosmos DB was introduced in 2017.
  • Cosmos DB expands on it by adding multi-model support, global distribution capabilities and relational-like guarantees for latency, throughput, consistency and availability.

Why Cosmos DB?

  • It is schema-free: it indexes all the data without requiring you to deal with schema and index management.
  • It’s also multi-model, natively supporting document, key-value, graph, and column-family data models.
  • It is an industry-first globally distributed, horizontally scalable, multi-model database service. Azure Cosmos DB guarantees single-digit-millisecond latencies at the 99th percentile anywhere in the world, offers multiple well-defined consistency models to fine-tune performance, and guarantees high availability.
  • No need to worry about instances, servers, CPU or memory. Just select the throughput and required storage and create collections. Cosmos DB works based only on throughput, and it integrates with Azure Functions for serverless, event-driven solutions.
  • APIs and access methods – DocumentDB API, Graph API (Gremlin), MongoDB API, RESTful HTTP API and Table API. This gives more flexibility to the developer.
  • It is elastic, globally scalable and highly available, and it automatically indexes all your data.
  • 5 consistency levels – Strong, Bounded Staleness, Session, Consistent Prefix and Eventual. The application owner now has more options to choose between consistency and performance.


Example without Cosmos DB:

  • Geo-replicating the data might be a challenge for the developer.
  • Users from remote locations might experience latency and inconsistency in their data.
  • Providing automatic failover is a real challenge.


Example with Cosmos DB:

  • Data can be geo-distributed in a few clicks.
  • The developer does not need to worry about data replication.
  • Strong consistency can be given to end users across geo-distributed locations.
  • The web-tier application can be switched at any time between primary and secondary in a few clicks.
  • Failover can be initiated manually at any time, and automatic failover is available.


Data Replication Methods:

  • Replicate data with a single click – we can add/remove regions with a single click.
  • Failover can be customized at any time in a few clicks (automatic/manual).
  • The application does not need to change.
  • Easily move the web tier, and it will automatically find the nearest DB.
  • Write/read regions can be modified at any time.
  • New regions can be added/removed at any time.
  • The data can be accessed with different APIs.

Existing data can be migrated:

  • For example, if we already have a Mongo app we can just import it and move it over.
  • Just copy the Mongo data into Cosmos and replace the URL in the code.
  • We can use the Data Migration Tool for the migration.

5 Consistency Types:

There are 5 consistency levels from which the developer can choose according to the requirement.

  • Eventual – end users get the best performance, but the data will not be consistent across all regions.
  • Strong – the write is committed only after the copy to the read regions is successful (consistent data across all regions).
  • Bounded Staleness – an option to bound how far reads may lag behind writes, for example by 2 hours; if the bound is set to 0 it becomes strong consistency.
  • Session – consistent within a client session but not for all users; the client that commits the data can always see its own fresh data.
  • Consistent Prefix – the order of writes is preserved, so readers see a uniform, in-order view of the data.

Based on these 5 consistency levels, the application developer can decide whether to favour the best performance or consistent data for the end users.

Example of eventual consistency:

The data is not yet consistent in the read regions; only users in the write region can see the fresh data.


Replicate Data with a single click:

Cosmos DB offers replication to more regions, in just a few clicks, than Amazon and Google combined.



Recommendations from Microsoft:

  • According to Microsoft, Cosmos DB can be used for “any web, mobile, gaming and IoT applications that need to handle massive amounts of reads and writes on a global scale with low response times.” However, Cosmos DB’s best use cases might be those that leverage event-driven Azure Functions, which enable application code to be executed in a serverless environment.
  • It is not a relational database and not a SQL Server, and it is not good at arbitrary joins. The shape of the data does not matter as long as you don’t do joins.
  • The minimum is 400 RU/s per collection, which works out to around 25 USD per month. Each collection is charged individually, even if it contains only a small amount of data, so you may need to change your code to put all of your documents into one collection.
  • It’s a “NoSQL” platform with SQL on top of it for query operations, so it is better not to do multiple joins.
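A quick back-of-the-envelope calculation based on the figure quoted above (400 RU/s minimum at roughly 25 USD per month), assuming cost scales linearly with provisioned RU/s; actual Azure pricing varies by region and over time, so treat this as a sketch only:

```python
# Rough per-collection cost estimate, scaled linearly from the figure quoted
# in the text above: 400 RU/s minimum ~ 25 USD per month. This is an
# assumption for illustration, not official Azure pricing.
USD_PER_400_RU_MONTH = 25.0

def monthly_cost_usd(ru_per_s: int, collections: int = 1) -> float:
    return collections * (ru_per_s / 400) * USD_PER_400_RU_MONTH

# Ten small collections at the 400 RU/s minimum cost ten times as much as one,
# which is why consolidating documents into fewer collections saves money.
print(monthly_cost_usd(400, collections=10))   # 250.0
print(monthly_cost_usd(1000, collections=1))   # 62.5
```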

Thanks & Regards
Sathish Veerapandian
MVP – Office servers & services

Performing Veritas Enterprise Vault Upgrade for Exchange Environment

An upgrade of Veritas Enterprise Vault will vary according to the setup.

If it is on a single node, the upgrade is easier.
If it is on a Veritas cluster, a few factors need to be taken care of before the upgrade.
If it is on a Windows cluster, a few factors need to be taken care of before the upgrade.

In this article we will look at performing an Enterprise Vault upgrade on a Windows Server Failover Cluster.
Also, if we are upgrading from 11.x.x, a lot of things need to be taken care of, because:

12 = Major release
12.x = Minor release
12.x.y = Maintenance release

Readiness before upgrading to EV version 12.0 from 11:
1) EV 12.x and above requires Windows Server 2012, so if you are running an older OS version on the current EV server, you will have to migrate to a new server.
2) EV 12 supports Outlook 2016 on the server with the below condition: with Outlook 2016, the Exchange connection must be MAPI/HTTP and not RPC/HTTP.
3) It supports only SQL 2012 and above. If we have SQL 2008, we need to migrate to at least SQL 2012.

Note:
Enterprise Vault does not provide high-availability upgrades, meaning we cannot perform the upgrade while the system is active and accessible via the passive node. The upgrade must be completed on all the nodes in the cluster before we start the Enterprise Vault services again. The system will be down and not accessible during the upgrade, so it is better to plan and perform this upgrade on a weekend.

Below things needs to be done prior to the upgrade:

  1. Stop all the Task Controller services. No archiving must be initiated or running; stop all the jobs and make sure no jobs are running for any mailbox servers.
  2. Back up your Enterprise Vault server, its data and the SQL stores.
  3. Clean the queues; the queues must be empty. There is a procedure to clean up the queues if EV is running on a failover cluster.
  4. Unload the antivirus on the EV nodes.
  5. Ensure no backup or SQL jobs are running on the nodes during this time.
  6. Use only a supported Outlook client on the EV server –

The following versions of Outlook running on the server are not supported:
Outlook 2013 SP1 (64-bit version)
Outlook 2013 original release
Outlook 2016 (64-bit version)

Only the below versions of Outlook are supported on the EV server:

Outlook 2013 SP1 (32-bit version)
Outlook 2016 (32-bit Windows Installer, available with volume license)

If we need to upgrade Outlook on the EV nodes, perform the following:

Stop the EV admin cluster resource service from the failover cluster.
Install the supported version of Outlook.
Restart all the EV Services.

Upgrade EV on a Windows Server Failover Cluster on EV Nodes:

Before we run the upgrade, we need to run the Deployment Scanner to check the required software and settings.
In order to do that:
Load the media – run setup.exe – click Enterprise Vault – click Server Preparation.


Once the Deployment Scanner has completed, if the prerequisites are met we see results like below.


If they are not met, we might get something like below and need to correct the issues as mentioned in the report.


Log on to the active node with the Vault service account and bring the Admin service resource offline. If there are multiple sites, make sure they are also stopped.

Load the media and run the setup. Make sure no MMC consoles are open on the server.

Click Server Installation and select Upgrade Existing Server.

Select the EV services, Admin Console and search access components; select Operations Manager and Reporting only if we have Exchange integrated with EV.

Click install.


Once the setup is complete on the active node we get a screen like below. It is better to restart after the upgrade completes on all the other nodes, SQL and the indexes.


Steps to upgrade the databases (Directory, Monitoring & Audit):

Log on to the active node with the Vault service account.
Open the Enterprise Vault Management Shell.
Run the command Start-EVDatabaseUpgrade -Verbose.
Once the upgrade is complete we can see a dbupgrade subfolder containing the logs of the DB upgrade; these must be verified.

Once the databases are upgraded and the upgrade is completed on all the nodes, we can go to the failover cluster and bring the EV Admin Server resource online.

Additional Requirements based on setup:

  1. Upgrade the EV reporting component.
  2. Upgrade the MOM & SCOM management pack and delete the previous management packs.
  3. By default, EV deploys the Exchange server forms to users' computers automatically. If the forms from the organizational forms library are used, then the Exchange server forms need to be upgraded.

Thanks & Regards
Sathish Veerapandian

There has been an error installing the Enterprise Vault Cloud Storage Adapter Components

During an upgrade of Enterprise Vault in a Windows cluster, the below error was encountered.


Looking into Event Viewer, in the application log on the affected node, we can see the below error message.


We can also see the below error message in the EV installation logs:

Machine policy value ‘DisableUserInstalls’ is 0

The installation log can be found in the EV installation directory, with the name format EVInstall.date.time.log.

Solution:

Creating the below registry key will fix the issue:

HKEY_LOCAL_MACHINE\SOFTWARE\KVS\Enterprise Vault\CloudStoragePlugins\Install

What is this Cloud Storage plugin?

We can use the Enterprise Vault Administration Console to enable and configure most cloud storage services as secondary storage for our store partitions.

To enable this, open the Vault Admin Console, navigate to the store partition and open its properties.

Select the store that needs secondary storage.

First click on Collections and select Enterprise Vault.

Then click on Migration and select Migrate Files.

Caution: If you use secondary storage that is slow to respond, some Enterprise Vault operations that access this storage will take a long time. For example, both tape and cloud storage can be very slow. We also get this warning while enabling the service.


Then in Migrate Files we can select the cloud storage subscription we have and apply. There is also an option to remove the collection files from primary storage after they have been migrated.


Later, in the cloud storage service properties, we can provide the service name, class, secure access ID and other options.

Thanks & Regards
Sathish Veerapandian

Inbox folder renamed to Archive

One of the users reported that their Inbox folder had been renamed to Archive.

One possibility is that the user accidentally clicked Archive while the Inbox was highlighted, and then chose to create an archive folder or use an existing folder.


While troubleshooting, I found that this is a known issue and there is an article released by Microsoft:

https://support.microsoft.com/en-us/help/2826855/folder-names-are-incorrect-or-displayed-in-an-incorrect-language-in-ou 

As per Microsoft, this can also occur if a mobile device with an application such as MDM, or a third-party server application, synchronizes the Exchange Server mailbox. It could also have been caused by a malfunctioning add-in.

If the default Inbox folder has been unexpectedly renamed to Archive, we need to skip directly to step 4 and not look into steps 1, 2 & 3.

Use step 4, with the MFCMAPI tool, to fix this problem.

Once step 4 is completed we need to reset the folder names as below:

outlook.exe /resetfoldernames

After performing step 4 we can run the below command and make sure the Inbox folder in the root folder path is named correctly and is not shown as Archive.

Get-MailboxFolderStatistics mbxname | select Name,FolderPath,FolderSize,ItemsInFolder

Thanks & Regards 

Sathish Veerapandian
