Create Cosmos DB, failover options and data replication options from an Azure subscription

This article outlines the steps to create a Cosmos DB account from an Azure subscription.

  1. Log in to the Azure portal – click on Azure Cosmos DB – Create Cosmos DB.
  2. Type the document ID – keep in mind that this document ID is the URL we will be using as the connection string in the application.
  3. Select the preferred API according to your requirement.
  4. Choose the Azure subscription and select the resource group.
  5. Choose the primary location where the data needs to be replicated. There is an option to enable geo-redundancy, which can also be done later. A minimal scripted sketch follows this list.
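
For repeatable deployments, the same account can also be created from PowerShell. The below is a minimal sketch, assuming the Az.CosmosDB module is installed; the resource group, account name and region are placeholder values to substitute with your own.

# Hedged sketch: create a SQL-API Cosmos DB account with Session consistency.
# Resource group, account name and region below are placeholders.
New-AzCosmosDBAccount -ResourceGroupName "MyResourceGroup" `
    -Name "mycosmosaccount" `
    -ApiKind "Sql" `
    -Location @("North Europe") `
    -DefaultConsistencyLevel "Session"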


To enable geo-redundancy –

Click on Enable Geo-Redundancy and choose the preferred region.
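
This step can be scripted as well. A minimal sketch, assuming the Az.CosmosDB module and placeholder names – the first location in the list remains the write region, and later entries become read regions:

# Hedged sketch: add a secondary read region to an existing account.
Update-AzCosmosDBAccountRegion -ResourceGroupName "MyResourceGroup" `
    -Name "mycosmosaccount" `
    -Location @("North Europe", "West Europe")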


Replicate data globally in a few clicks –


Failover options –

There are two failover options: manual and automatic.

A manual failover can be triggered at any time – we just need to select the disclaimer checkbox and initiate the failover. A scripted sketch follows.
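
From PowerShell, a manual failover amounts to reordering the failover priorities so that a different region becomes the write region. A minimal sketch, assuming the Az.CosmosDB module and placeholder regions:

# Hedged sketch: promote West Europe to priority 0 (the write region).
# Reordering the FailoverPolicy list is what triggers the manual failover.
Update-AzCosmosDBAccountFailoverPriority -ResourceGroupName "MyResourceGroup" `
    -Name "mycosmosaccount" `
    -FailoverPolicy @("West Europe", "North Europe")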


Add new regions at any time and replicate your data in a few minutes –


Failover options – Automatic

Automatic failover is not enabled by default; we need to go and enable it.


There is also an option to change the failover priorities in a few clicks. The good part is that this can be done at any time, and we do not need to change anything in the code. A scripted sketch of enabling automatic failover follows.
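
Enabling automatic failover can also be scripted. A minimal sketch, assuming the Az.CosmosDB module and placeholder names:

# Hedged sketch: turn on automatic failover for the account.
Update-AzCosmosDBAccount -ResourceGroupName "MyResourceGroup" `
    -Name "mycosmosaccount" `
    -EnableAutomaticFailover $true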


Consistency levels:

The consistency level can be modified and altered at any time. The default consistency level is Session. A scripted sketch follows.
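
Changing the default consistency level can likewise be done from PowerShell. A minimal sketch, assuming the Az.CosmosDB module and placeholder names:

# Hedged sketch: change the account's default consistency level.
Update-AzCosmosDBAccount -ResourceGroupName "MyResourceGroup" `
    -Name "mycosmosaccount" `
    -DefaultConsistencyLevel "Strong"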


Network Security for the Database:

We have an option to restrict access to the database to only a few subnets. This hardens the security of the documents. A VAPT (vulnerability assessment and penetration test) can be initiated after setting up this restriction, which eases the database admin's job with regard to data security.


Endpoint and Keys to integrate with your code:

We need to use the URI and the primary key to integrate with the code. These can be seen by clicking on the Keys section on the left side. A scripted sketch follows.
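
The endpoint and keys can also be read from PowerShell instead of the portal. A minimal sketch, assuming the Az.CosmosDB module and placeholder names:

# Hedged sketch: read the document endpoint and the account keys.
$account = Get-AzCosmosDBAccount -ResourceGroupName "MyResourceGroup" -Name "mycosmosaccount"
$account.DocumentEndpoint
Get-AzCosmosDBAccountKey -ResourceGroupName "MyResourceGroup" -Name "mycosmosaccount" -Type "Keys"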


Summary:

Now a Cosmos database is created. Next, create a new collection, then create documents – they are stored as JSON rows. Try to keep most of the documents under one collection, because the pay model is per collection.

Create collection:

Click on Add collection


Create a new database ID – then a collection ID.

Remember that objects in Cosmos DB are created in the following order: database, then collection, then documents.


Now we need to choose the throughput and the storage capacity; we are charged according to this selection. There is also an option to choose unique keys, which adds more data integrity. A scripted sketch follows.
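
Creating the database and collection can be scripted as well. A minimal sketch for a SQL-API account, assuming the Az.CosmosDB module; all names and the partition key path are placeholders:

# Hedged sketch: create a database and a collection (container) at 400 RU/s.
New-AzCosmosDBSqlDatabase -ResourceGroupName "MyResourceGroup" `
    -AccountName "mycosmosaccount" -Name "MyDatabase"
New-AzCosmosDBSqlContainer -ResourceGroupName "MyResourceGroup" `
    -AccountName "mycosmosaccount" -DatabaseName "MyDatabase" `
    -Name "MyCollection" -PartitionKeyKind "Hash" `
    -PartitionKeyPath "/id" -Throughput 400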


Example of a new document – each document is simply a JSON object with an id property.


It is better to define the document ID and collection ID explicitly.


Once the above is done, we can connect to the documents via the preferred available API, and the developer does not need to worry about data schema, indexing or security.

More sample code on GitHub:

https://github.com/Azure-Samples/azure-cosmos-db-documentdb-nodejs-getting-started

Example below:

Before you can run this sample, you must have the following prerequisites:

◦ An active Azure Cosmos DB account.
◦ Node.js version v0.10.29 or higher.
◦ Git.

1) Clone the repository.
2) Change directories.
3) Substitute the endpoint and primary key with your own values.
4) Run npm install in a terminal to install the required npm modules.
5) Run node app.js in a terminal to start your node application.
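
Put together, the steps look like the sketch below. The config file name and setting locations are assumptions based on the sample at the time of writing, so check the repository's README before running it.

# Hedged sketch of the steps above; works in any shell.
git clone https://github.com/Azure-Samples/azure-cosmos-db-documentdb-nodejs-getting-started.git
cd azure-cosmos-db-documentdb-nodejs-getting-started
# Edit the sample's config file (assumed to be config.js) and paste in your URI and primary key.
npm install
node app.js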

Thanks & Regards
Sathish Veerapandian
MVP – Office servers & Services.

Microsoft Cosmos DB features, options and summary

This article gives an introduction to Microsoft Cosmos DB, the features available in it, and the options to integrate it with an application.

Introduction:

Cosmos DB is the next generation of Azure DB; it is an enhanced version of DocumentDB.
DocumentDB customers, with their data, automatically became Azure Cosmos DB customers.
The transition is seamless, and they now have access to all capabilities offered by Azure Cosmos DB.

Cosmos DB is a planet-scale database. It is a good choice for any serverless application that needs low, order-of-millisecond response times and needs to scale rapidly and globally. It is transparent to your application, and the configuration does not need to change.

How it was derived:

Microsoft Cosmos DB isn't entirely new: it grew out of a Microsoft development initiative called Project Florence that began in 2010. Project Florence was a speculative glimpse into a future where both our natural and digital worlds could co-exist in harmony through enhanced communication.


  • It was first commercialized in 2015 with the release of a NoSQL database called Azure DocumentDB.
  • Cosmos DB was introduced in 2017.
  • Cosmos DB expands on it by adding multi-model support, global distribution capabilities and relational-like guarantees for latency, throughput, consistency and availability.

Why Cosmos DB?

  • It has no data schema and is schema-free. It indexes all the data without requiring you to deal with schema and index management.
  • It's also multi-model, natively supporting document, key-value, graph, and column-family data models.
  • It is the industry's first globally distributed, horizontally scalable, multi-model database service. Azure Cosmos DB guarantees single-digit-millisecond latencies at the 99th percentile anywhere in the world, offers multiple well-defined consistency models to fine-tune performance, and guarantees high availability.
  • No need to worry about instances, servers, CPU or memory. Just select the throughput and the required storage and create collections. Cosmos DB works based only on throughput. It has integrations with Azure Functions for serverless, event-driven solutions.
  • APIs and access methods – DocumentDB API, Graph API (Gremlin), MongoDB API, RESTful HTTP API and Table API. This gives more flexibility to the developer.
  • It is elastic, globally scalable, highly available, and automatically indexes all your data.
  • Five consistency levels – Strong, Bounded Staleness, Session, Consistent Prefix and Eventual. The application owner now has more options to choose between consistency and performance.


Example without Cosmos DB:

  • Geo-replication of the data might be a challenge for the developer.
  • Users in remote locations might experience latency and inconsistency in their data.
  • Providing automatic failover is a real challenge.


Example with Cosmos DB:

  • Data can be geo-distributed in a few clicks.
  • The developer does not need to worry about data replication.
  • Strong consistency can be given to the end users across geo-distributed locations.
  • The web-tier application can be switched between primary and secondary at any time in a few clicks.
  • Failover can be initiated manually at any time, and automatic failover is available.


Data Replication Methods:

  • Replicate data with a single click – regions can be added or removed with a single click.
  • Failover can be customized at any time in a few clicks (automatic or manual).
  • The application does not need to change.
  • Easily move the web tier, and it will automatically find the nearest DB.
  • Write/read regions can be modified at any time.
  • New regions can be added or removed at any time.
  • The data can be accessed with different APIs.

Existing data can be migrated:

  • For example, if we already have a Mongo app, we can just import the data and move it over.
  • Just copy the Mongo data into Cosmos DB and replace the URL in the code.
  • We can use the Data Migration Tool for the migration, as sketched below.
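
As a rough illustration only – verify the exact switches against the tool's documentation – a command-line import with the Data Migration Tool (dt.exe) looks something like this; every connection string, database and collection name below is a placeholder:

# Hedged sketch: import a MongoDB collection into Cosmos DB with dt.exe.
dt.exe /s:MongoDB /s.ConnectionString:"mongodb://localhost:27017/mydb" /s.Collection:orders `
    /t:DocumentDB /t.ConnectionString:"AccountEndpoint=https://mycosmosaccount.documents.azure.com:443/;AccountKey=<your-key>;Database=MyDatabase" /t.Collection:orders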

5 Consistency Types:

There are five consistency types, and the developer can choose among them according to the requirement.

  • Eventual – asynchronous; end users get the best performance (but the data will not be consistent across regions).
  • Strong – commits to the database only after the copy to the write/read regions is successful (consistent data across all regions).
  • Bounded Staleness – option to bound the staleness, for example to 2 hours. If it is set to 0, it becomes strong consistency. (We can select an interval up to which the data may lag until the replication to the read regions is completed.)
  • Session – not consistent for all users; the client that commits the data can see its own fresh data.
  • Consistent Prefix – the order of the data is maintained during the copy, so readers see a uniform prefix of the data.

Based on these five consistency levels, the application developer can decide whether to give the end users the best performance or consistent data. A scripted sketch of the bounded-staleness settings follows.
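
For instance, bounded staleness with a two-hour window could be configured from PowerShell roughly as below – a minimal sketch, assuming the Az.CosmosDB module and placeholder names:

# Hedged sketch: bounded staleness with a 2-hour (7200-second) window.
Update-AzCosmosDBAccount -ResourceGroupName "MyResourceGroup" `
    -Name "mycosmosaccount" `
    -DefaultConsistencyLevel "BoundedStaleness" `
    -MaxStalenessIntervalInSeconds 7200 `
    -MaxStalenessPrefix 100000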

Example of Eventual Replication:

The data is not consistent for the read regions, and only users in the write region can see the fresh data.


Replicate Data with a single click:

Azure provides more regions to replicate to, in just a few clicks, than Amazon and Google combined.


Available API Methods:

The available APIs are the DocumentDB API, MongoDB API, Graph API (Gremlin), Table API and the RESTful HTTP API.

Recommendations from Microsoft:

  • According to Microsoft, Cosmos DB can be used for "any Web, mobile, gaming and IoT applications that need to handle massive amounts of reads and writes on a global scale with low response times." However, Cosmos DB's best use cases might be those that leverage event-driven Azure Functions, which enable application code to be executed in a serverless environment.
  • It is not a relational database. It is not SQL Server and is not good at random joins; it does not matter what shape the data has, as long as you don't do joins.
  • The minimum is 400 RU/s per collection, which is around 25 USD per month. Each collection is charged individually, even if it contains only a small amount of data, so you may need to change your code to put all of your documents into one collection.
  • It's a "NoSQL" platform with SQL on top of it for SQL operations, so it is better not to do multiple joins.

Thanks & Regards
Sathish Veerapandian
MVP – Office servers & services

Performing Veritas Enterprise Vault Upgrade for Exchange Environment

An upgrade of Veritas Enterprise Vault will vary according to the setup.

If it is on a single node, the upgrade is easier.
If it is on a Veritas cluster, a few factors need to be taken care of before the upgrade.
If it is on a Windows cluster, a few factors need to be taken care of before the upgrade.

In this article we will have a look at performing an Enterprise Vault upgrade on a Windows Server failover cluster.
Also, if we are upgrading from 11.x.x, a lot of things need to be taken care of, because:

12  = Major release
12.x = Minor release
12.x.y = Maintenance release

Readiness before upgrading to EV version 12.0 from 11:
1) EV 12.x and above require Windows Server 2012; hence, if you are running an older OS version on the current EV server, you will have to migrate to a new server.
2) EV 12 supports Outlook 2016 on the server with the condition that the Exchange connection must be MAPI/HTTP and not RPC/HTTP.
3) It supports only SQL 2012 and above. If we have SQL 2008, then we need to migrate to at least SQL 2012.

Note:
Enterprise Vault does not provide high-availability upgrades, meaning we cannot perform the upgrade while the system is active and accessible via the passive node. The upgrade must be completed on all the nodes in the cluster before we start the Enterprise Vault services again. The system will be down and not accessible during the upgrade, so it is better to plan and perform this upgrade on a weekend.

The below things need to be done prior to the upgrade:

  1. Stop all the task controller services. No archiving must be initiated or running; stop all the jobs and make sure no jobs are running for any mailbox servers.
  2. Back up your Enterprise Vault server, its data and the SQL stores.
  3. Clean the queues; the queues must be empty. There is a procedure to clean up the queues if EV is running on a failover cluster.
  4. Unload the antivirus on the EV nodes.
  5. Ensure no backup or SQL jobs are running on the nodes during this time.
  6. Check the supported Outlook client on the EV server –

The following versions of Outlook running on the server are not supported:
Outlook 2013 SP1 (64-bit version)
Outlook 2013 original release
Outlook 2016 (64-bit version)

Only the below versions of Outlook on the EV server are supported:

Outlook 2013 SP1 (32-bit version)
Outlook 2016 (32-bit Windows Installer, available with volume licensing)

If we need to upgrade Outlook on the EV nodes, perform the following:

Stop the EV admin cluster resource service from the failover cluster.
Install the supported version of Outlook.
Restart all the EV services.

Upgrade EV on a Windows Server Failover Cluster on EV Nodes:

Before we run the upgrade, we need to run the deployment scanner to check the required software and settings.
In order to perform that:
Load the media – run setup.exe – click Enterprise Vault – click Server Preparation.


Once the deployment scanner has completed, and if the prerequisites are successful, we can see a successful result in the report.


If they are not successful, the report flags the failures, and we need to correct them as mentioned in the report.


Log on to the active node with the Vault service account and bring the admin service resource offline. If there are multiple sites, make sure they are also stopped.

Load the media and run the setup. Make sure no MMC consoles are open on the server.

Click Server Installation and select Upgrade Existing Server.

Select only the EV services, admin console and search access components; select Operations Manager and Reporting only if we have Exchange integrated with EV.

Click install.


Once the setup is complete on the active node, we get a completion screen. It is better to restart after the upgrade completes on all the other nodes, the SQL databases and the indexes.


Steps to upgrade the DB (Directory, Monitoring & Audit):

Log on to the active node with the Vault service account.
Open the Enterprise Vault Management Shell.
Run the command Start-EvDatabaseUpgrade -Verbose.
Once the upgrade is complete, we can see a dbupgrade subfolder containing the logs of the DB upgrade; these must be verified.

Once the DB is upgraded and the upgrade is completed on all the nodes, we can go to the failover cluster and bring the EV Admin Server resource online.

Additional Requirements based on setup:

  1. Upgrade the EV reporting component.
  2. Upgrade the MOM & SCOM management packs and delete the previous management packs.
  3. By default, EV deploys the Exchange Server forms to users' computers automatically. If the forms from an organizational forms library are used, then the Exchange Server forms need to be upgraded.

Thanks & Regards
Sathish Veerapandian

There has been an error installing the Enterprise Vault Cloud Storage Adapter Components

During the upgrade of an Enterprise Vault server running in a Windows cluster, I was getting an error installing the Cloud Storage Adapter components.


Looking into the Event Viewer application log on the affected node, we can see the corresponding error message.


We can also see the below error message in the EV installation logs:

Machine policy value ‘DisableUserInstalls’ is 0

The installation log folder can be found in the EV installation directory with the name format EVInstall.date.time.log.

Solution:

Creating the below registry key will fix the issue:

HKEY_LOCAL_MACHINE\SOFTWARE\KVS\Enterprise Vault\CloudStoragePlugins\Install
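
A minimal PowerShell sketch for creating that key (run elevated, and verify the exact path against the Veritas article for your version before applying):

# Hedged sketch: create the registry key named in the guidance above.
New-Item -Path "HKLM:\SOFTWARE\KVS\Enterprise Vault\CloudStoragePlugins\Install" -Force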

What are these Cloud Storage Plugins?

We can use the Enterprise Vault Administration Console to enable and configure most cloud storage services as secondary storage for our store partitions.

To enable this – open the Vault Admin Console – navigate to the store partition – Properties.

Select the store which needs to have secondary storage.

First we need to click on Collections – select Enterprise Vault.

Then we need to click on Migration – select Migrate Files.

Caution: if you use secondary storage that is slow to respond, some Enterprise Vault operations that access this storage will take a long time. For example, both tape and cloud storage can be very slow. We get this warning as well while enabling this service.


Then, in Migrate Files, we can select the cloud storage subscription we have and apply. There is also an option to remove the collection files from primary storage after they have been migrated.


Later, in the cloud storage service properties, we can provide the service name, class, secure access ID and other options.

Thanks & Regards
Sathish Veerapandian

Inbox folder renamed to Archive

One of the users reported that the Inbox folder was renamed to Archive.

One possibility is that the user accidentally triggered the Archive action while the Inbox was highlighted – clicking on Archive and then choosing to create an archive folder or use an existing folder.


While troubleshooting, I found that this is a known issue, and there is an article released by Microsoft:

https://support.microsoft.com/en-us/help/2826855/folder-names-are-incorrect-or-displayed-in-an-incorrect-language-in-ou 

As per Microsoft, this can also occur if a mobile device with an application such as MDM, or a third-party server application, synchronizes the Exchange Server mailbox. It could also have been caused by a malfunctioning add-in.

If the default Inbox folder has been unexpectedly renamed to Archive, we need to skip directly to step 4 in that article and not look into steps 1, 2 and 3.

Use step 4 with the MFCMAPI tool to fix this problem.

Once step 4 is completed, we need to reset the folder names as below:

outlook.exe /resetfoldernames

After performing step 4, we can run the below command and make sure the Inbox folder in the root folder path is named correctly and is no longer shown as Archive.

Get-MailboxFolderStatistics mbxname | Select Name,FolderPath,FolderSize,ItemsInFolder

Thanks & Regards 

Sathish Veerapandian

Update the NTP server on a Linux system

We use the NTP protocol to sync the time of servers, network devices and client PCs with our local time zone and to keep the correct time across the network. This can be accomplished through an NTP server configured locally in our network, which has the capability to receive and update the local time from satellites.

The time that this machine keeps is set as a benchmark for all the machines on the network that are configured to use it as their NTP server. This article focuses on updating the local NTP server on a Linux system.

To see the current date –

Putty/SSH to the server and run – date

To check the NTP service status, run – service ntpd status


To sync with your NTP server and get the up-to-date time from it, run the below –

ntpdate ntpserverfqdn

Example – ntpdate ntp.exchangequery.local

Once it has updated, we will get messages like the below:

ntpdate: step time server <server> offset <seconds> sec
ntpdate: adjust time server <server> offset <seconds> sec

To sync the hardware clock –

hwclock --systohc

Reason to run the above command: there are two types of clocks in Linux operating systems.

1) The hardware clock is the battery-powered "Real Time Clock" (also known as the "RTC" or "CMOS clock"), which keeps track of time when the system is turned off but is not used when the system is running.

2) The system clock (sometimes called the "kernel clock" or "software clock") is a software counter based on the timer interrupt.

The above command sets the hardware clock to the current system time, which is updated from the local NTP server in our environment.

Note: we have the option to set the hardware clock from the system time, or to set the system time from the hardware clock.

Finally, we need to add the NTP server to the configuration files.

Navigate with vi to the ntp.conf location – vi /etc/ntp.conf

vi /etc/sysconfig/ntpdate


Finally, restart the NTP service –

service ntpd restart

There is another option: updating the servers from the website pool.ntp.org.

We can go to the official NTP pool site and choose servers for our continental zone.

In order to update them, vi to the ntp.conf location:

vi /etc/ntp.conf

We can see the default NTP servers in the file. We can comment them out and update the file with the correct servers for the respective country where the server is hosted.


In my example I update with servers for my local time zone and comment out the default ones, as in the hedged sketch below.

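The exact lines depend on the distribution and region; the below is an illustrative sketch only, with zone names such as asia to be replaced by the correct zone for your location from pool.ntp.org:

# Comment out the default pool servers:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
# Add servers from your own zone (illustrative; check pool.ntp.org):
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst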

After the above is completed, the server will sync with the updated servers.

We can check the NTP peer synchronization with the below command:

ntpq -p

Based on our requirement, we can set the NTP server to be our local NTP server or one from the local time zone, and after this the Linux server will keep the correct current local time.

Thanks & Regards
Sathish Veerapandian

Renew SSL certificate for ADFS URL

This document outlines the steps to renew the SSL certificate for the ADFS claims provider's federation metadata URL.

1) To get the application ID and the certificate hash, run the below command:

netsh http show sslcert


Copy only the application ID value; we require this for the certificate renewal. It is better to keep a copy of the full results.

2) Run this command to see the ADFS listeners:

netsh http show urlacl 


This is just to take a copy of the ACL URLs before the certificate renewal. This part is sensitive because ADFS keeps URL reservations in HTTP.SYS; the copy will help us in case we face any issues after the certificate renewal.

3) Delete the old certificate bindings –

$Command = "http delete sslcert hostnameport=adfs.exchangequery.com:443"
$Command | netsh

$Command = "http delete sslcert hostnameport=adfs.exchangequery.com:49443"
$Command | netsh

$Command = "http delete sslcert hostnameport=localhost:443"
$Command | netsh

$Command = "http delete sslcert hostnameport=EnterpriseRegistration.exchangequery.com:443"
$Command | netsh

4) Delete the old IP and port entries (note that IP-based bindings use ipport rather than hostnameport):

$Command = "http delete sslcert ipport=0.0.0.0:443"
$Command | netsh

5) Now we can add the new certificates:

Prerequisites:

Take the app ID which was noted down in step 1.

Take the certificate hash – this can be taken from the new certificate's thumbprint.

For example – remove all the spaces from the thumbprint and copy the resulting certificate hash value.


# APP ID
$guid = "paste the appid here"

# Cert Hash
$certhash = "paste the certificate thumbprint here"

To renew the actual metadata URL:

$hostnameport = "adfs.exchangequery.com:443"
$Command = "http add sslcert hostnameport=$hostnameport certhash=$certhash appid={$guid} certstorename=MY sslctlstorename=AdfsTrustedDevices clientcertnegotiation=disable"
$Command | netsh

To renew localhost:

$hostnameport = "localhost:443"
$Command = "http add sslcert hostnameport=$hostnameport certhash=$certhash appid={$guid} certstorename=MY sslctlstorename=AdfsTrustedDevices clientcertnegotiation=disable"
$Command | netsh

To renew Device Registrations:

$hostnameport = "adfs.exchangequery.com:49443"
$Command = "http add sslcert hostnameport=$hostnameport certhash=$certhash appid={$guid} certstorename=MY clientcertnegotiation=enable"
$Command | netsh

The above is required because changes were made in ADFS on Windows Server 2012 R2 to support device registration, which happens on port 49443.

$hostnameport = "EnterpriseRegistration.exchangequery.com:443"
$Command = "http add sslcert hostnameport=$hostnameport certhash=$certhash appid={$guid} certstorename=MY sslctlstorename=AdfsTrustedDevices clientcertnegotiation=disable"
$Command | netsh

The above is also required for the device registration service. A final verification follows.
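
As a final check, we can simply re-run the command from step 1 and confirm that each hostname:port binding now shows the new certificate hash:

netsh http show sslcert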

Hope this helps.
