Error loading Microsoft Teams – modern authentication failed with status code caa20004

After enabling Microsoft Teams in a federated setup with ADFS, on-premises users might get this error when they try to log in to Microsoft Teams for the first time.

WhatsApp Image 2018-05-30 at 21.05.12

In the client logs, found at the location below, we can see the following messages:

C:\Users\username\AppData\Roaming\Microsoft\Teams

Wed May 30 2018 06:51:54 GMT+0400 (Arabian Standard Time) <7092> — warning — SSO: ssoerr – (status) Unable to get errCode. Err:Error: ADAL error: 0xCAA10001SSO: ssoerr – (status) Unable to get errorDesc. Err:Error: ADAL error: 0xCAA10001

Wed May 30 2018 06:51:54 GMT+0400 (Arabian Standard Time) <7092> — event — Microsoft_ADAL_api_id: 13, Microsoft_ADAL_correlationId: 2c46e41d-ef75-49ed-b277-cfd61427b273, Microsoft_ADAL_response_rtime: 2, Microsoft_ADAL_api_error_code: caa10001,

There is also a Get logs option, which can be opened from the Teams tray icon when the issue occurs, as shown below:

Untitled

When the issue occurs, the Get logs output shows an error message about being unable to get the ADAL access token.

Untitled2

In the example below the login was successful, so the log shows success after the access token is obtained.

Untitled3

There is also an option to download the MS Teams diagnostics logs by using the key combination below:

Ctrl + Shift + Alt + 1

12

 

The diagnostics log contains a lot of information, such as the client version, computer name, memory, and user ID. Since reading the whole log would be difficult, it is best to search only for the information related to the issue we are currently facing.

Untitled4

Below is an example of a successful access token acquisition.

Untitled5

 

Azure AD dependent apps like Microsoft Teams have an optimized path for the first-time login: they authenticate against the WS-Trust Kerberos authentication endpoints of ADFS. If that first attempt is not successful, the client falls back to an interactive login session presented as a web browser dialog.

However, the newer Office and ADAL clients first try only the WS-Trust 1.3 version of the Windows integrated authentication endpoint, which is not enabled by default.

Solution:

Enable WS-Trust 1.3 for desktop client SSO on the on-premises ADFS server that is federated with the Azure AD tenant by running the command below.

Enable-AdfsEndpoint -TargetAddressPath "/adfs/services/trust/13/windowstransport"

We also want to ensure that both Forms and Windows Integrated Authentication (WIA) are enabled in our global authentication policies.

Untitled5
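After running the command, the change can be verified from PowerShell before testing the Teams sign-in again. Below is a minimal sketch, assuming ADFS on Windows Server 2012 R2 or later and that it is run on the ADFS server itself:

# Confirm the WS-Trust 1.3 windowstransport endpoint is now enabled
Get-AdfsEndpoint -AddressPath "/adfs/services/trust/13/windowstransport" | Select-Object AddressPath, Enabled, Proxy

# Confirm Forms and Windows Integrated Authentication are enabled as primary intranet providers
(Get-AdfsGlobalAuthenticationPolicy).PrimaryIntranetAuthenticationProvider

# Restart the ADFS service so the endpoint change takes effect
Restart-Service adfssrv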

Storage Explorer in Azure portal and its options

The Storage Explorer tool is now available in the storage accounts section of the Azure portal.

blob1

 

From here we have options to create and manage blob containers, file shares, and queues.

New blob containers can be created, deleted, and managed:

 

blob6

Further, we can upload and delete blobs:

blob9

We can further drill down and manage properties:

10

These are the options available in the properties:

11

In the same way, file shares can be created, deleted, and managed.

We also have options to upload files, connect to a VM, and download files from here.

blob7

Storage queues can also be created and managed.

There are options to add a message, dequeue a message, and clear the queue.

blob8

Below is a short summary of Azure storage account blobs, file shares, and queues.

What is Azure Blob Storage?

Azure Blob storage is Microsoft's object storage solution.
This storage type is optimized for storing large amounts of unstructured data, such as text or binary data.
Items stored in blob storage can be accessed from anywhere in the world over HTTP/HTTPS. They can be accessed through Azure Functions and command-line tools (CLI, PowerShell, etc.), and client libraries are available for multiple languages.

Once created, the account has a service endpoint like the one below. This is used in the connection string that our APIs use to access the data in the Azure storage account.

blob91.png

There are 3 types of blobs:

Block blobs – Used to store text and binary data, up to roughly 4.7 TB per blob. The data is stored as blocks that can be managed individually.

Append blobs – Similar to block blobs, but optimized for append operations. They are best suited for recurring tasks such as logging data from virtual machines.

Page blobs – The data is stored in pages and accessed randomly, and a page blob can be up to 8 TB in size.

So blobs are stored in the following hierarchy:

Storage account – containers – blobs

A storage account can hold multiple containers, and a container in turn can hold an unlimited number of blobs.
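The same hierarchy can be worked with from PowerShell as well. Below is a minimal sketch assuming the Az.Storage module; the resource group, account, container, and file names are only illustrative:

# Get the storage account context
$ctx = (Get-AzStorageAccount -ResourceGroupName "my-rg" -Name "mystorageaccount").Context

# Create a container and upload a local file as a block blob
New-AzStorageContainer -Name "mycontainer" -Context $ctx -Permission Off
Set-AzStorageBlobContent -File "C:\temp\sample.txt" -Container "mycontainer" -Blob "sample.txt" -Context $ctx

# List the blobs in the container
Get-AzStorageBlob -Container "mycontainer" -Context $ctx | Select-Object Name, Length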

What is Azure File Storage?
This is an Azure service through which we can create a file share in the cloud using the standard Server Message Block (SMB) protocol. It is very useful for migrating local file shares to Azure quickly and at minimal cost.

Once the file storage is created, we get a connection string like the one below.

We can use it to connect from either Windows or Linux.

blob92.png

The connection string also contains the username and password.

blob93

Since it uses SMB, it relies on port 445, so make sure port 445 is open in your local network firewall. We will not be able to connect if outbound traffic on port 445 is not allowed from the local network.
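Below is a minimal sketch of connecting from a Windows machine with PowerShell, assuming port 445 is open; the storage account name, share name, and key placeholder are illustrative:

# Verify that outbound port 445 is reachable from the local network
Test-NetConnection -ComputerName "mystorageaccount.file.core.windows.net" -Port 445

# Map the share as a drive; the user name is the storage account name and the password is the account key
$secureKey  = ConvertTo-SecureString -String "<storage-account-key>" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ("Azure\mystorageaccount", $secureKey)
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\mystorageaccount.file.core.windows.net\myshare" -Credential $credential -Persist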

What is Azure Storage Queue Service?

This is an Azure service for storing large volumes of messages that can be accessed from anywhere in the world over HTTP/HTTPS. A single message can be up to 64 KB in size. It provides persistent messaging within and between services, and even a single queue can hold an essentially unlimited number of messages.

Once created, we get an endpoint like the one below. REST-based GET/PUT/PEEK operations can then be performed against it.

blob94
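Messages can also be added and read outside the portal. Below is a minimal sketch assuming the Az.Storage PowerShell module, where the queue object exposes the underlying SDK CloudQueue (the exact namespace depends on the module version, so treat this as illustrative):

# Get the storage account context and create a queue
$ctx   = (Get-AzStorageAccount -ResourceGroupName "my-rg" -Name "mystorageaccount").Context
$queue = New-AzStorageQueue -Name "myqueue" -Context $ctx

# Put a message (up to 64 KB), then peek at it without dequeuing it
$message = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("hello from the queue")
$queue.CloudQueue.AddMessageAsync($message)
$queue.CloudQueue.PeekMessageAsync().Result.AsString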

 

 

 

Enable Azure DDOS Protection and its features

In Azure, we can enable DDoS protection in a few clicks for applications running in Azure virtual networks.

It protects the resources in a virtual network and their published endpoints, including public IP addresses. When combined with the Application Gateway web application firewall, DDoS Protection Standard can provide full layer 3 to layer 7 protection.

There are 2 service tiers:

Basic-

Basic protection is enabled by default. It provides protection against common network-layer attacks through always-on traffic monitoring and real-time mitigation.

Basic.png

Standard-

Standard protection is a paid premium service. It adds dedicated monitoring and machine learning, and tunes DDoS protection to the specific virtual network. When it is enabled, the application's traffic patterns are learned, which allows malicious traffic to be detected intelligently. We can switch between the two tiers on our virtual networks in a few clicks.

DDOS9

And then we can click on the standard plan.

DDOS10

The Standard tier also provides attack telemetry views through Azure Monitor, and alerting can be enabled when the application is under attack. Integrated layer 7 application protection can be provided by the Application Gateway WAF.

The Standard tier is integrated with virtual networks and protects Azure application service endpoints from DDoS attacks. It also has the alerting and telemetry features that are not present in the Basic DDoS protection plan, which comes free of cost.

To use the Standard tier, we first need to create a DDoS protection plan.

Navigate to Azure Portal – Click on Create DDOS protection Plan

DDOS2

Type Name – Choose Subscription – Select resource Group and choose the location.

DDOS3

Once this is done, the deployment completes successfully.

DDOS5

There is also an automation option during this deployment.

DDOS18
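As an alternative to the portal, the plan can also be created and attached to a virtual network from PowerShell. Below is a minimal sketch assuming the Az.Network module; the names and region are illustrative:

# Create the DDoS protection plan
$plan = New-AzDdosProtectionPlan -ResourceGroupName "my-rg" -Name "my-ddos-plan" -Location "westeurope"

# Attach the plan to an existing virtual network and enable standard protection on it
$vnet = Get-AzVirtualNetwork -ResourceGroupName "my-rg" -Name "my-vnet"
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork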

After it is deployed, when we open the DDoS resource we can see the options below.

Activity Log – 

This is more like an audit log, recording modifications made to the resources in the subscription.
There are also a few fields that tell us the status of the operation and other properties. However, this log does not record GET (read) operations on the resources.

There is an option to filter by resource, resource type, and operation.

DDOS19

We also have an option to filter by category, severity, and the user who initiated the operation.

DDOS20

Access Control(IAM)-

We can view who has access to the resource, grant new access, and remove existing access.
DDOS21

Tags- 

Tags are helpful when we need to organize our resources for billing or management. They can be applied to resource groups or to resources directly.
Querying by a tag name and value retrieves all the resources in our subscription that carry that tag, which is especially useful for billing and tracking.

Tags1

Tags are supported only for resources deployed through Resource Manager; resources deployed through the classic model are not supported.

By default, a resource group has no tags assigned. We can assign them by running a command like the one below.

Tags
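For reference, assigning and querying tags from PowerShell looks roughly like the sketch below, assuming the Az.Resources module; the group name and tag values are illustrative:

# Assign tags to a resource group that has none
Set-AzResourceGroup -Name "my-rg" -Tag @{ Department = "IT"; Environment = "Test" }

# Retrieve all resources carrying a given tag
Get-AzResource -Tag @{ Department = "IT" } | Select-Object Name, ResourceType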

Locks – 

Management locks help us prevent accidental deletion or modification of our Azure resources. We can manage these locks from within the Azure portal.

locks

As administrators, we might need to lock a subscription, resource group, or resource to prevent other users in the organization from accidentally deleting or modifying critical resources.

There are 2 lock levels:

Delete (CanNotDelete) –
Authorized users can read and modify a resource, but they cannot delete it.

ReadOnly –
Authorized users can only read a resource; they cannot modify or delete it.

locks1
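Both lock levels can also be applied from PowerShell. Below is a minimal sketch assuming the Az.Resources module, with illustrative names:

# Prevent accidental deletion of a resource group
New-AzResourceLock -LockName "do-not-delete" -LockLevel CanNotDelete -ResourceGroupName "my-rg"

# Make a single resource read-only
New-AzResourceLock -LockName "read-only" -LockLevel ReadOnly -ResourceGroupName "my-rg" -ResourceName "my-ddos-plan" -ResourceType "Microsoft.Network/ddosProtectionPlans"

# List and remove locks
Get-AzResourceLock -ResourceGroupName "my-rg"
Remove-AzResourceLock -LockName "do-not-delete" -ResourceGroupName "my-rg"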

Metrics – 

Metrics allow us to monitor the health, performance, availability, and usage of our services.

metrics

Thanks & Regards
Sathish Veerapandian

Configure Enterprise Vault Server Driven PST migration

This article outlines the steps to perform a bulk import of PST files into the archives of a large number of mailboxes in Enterprise Vault.

There are a few methods to perform a server-driven migration in Enterprise Vault; here we cover the option that uses the PST task controller services.

Prerequisites:

A CSV file with the below information needs to be prepared to feed the data into Enterprise Vault Personal Store Management.

Untitled

Untitled

Where (an illustrative CSV layout is sketched after this list) –

UNCPath – path of the PST files. It is better to keep them on the Enterprise Vault server, which speeds up the migration.
Mailbox – display name of the mailbox associated with this EV archive.
Archive – display name of the archive.
Archive Type – Exchange Mailbox, since it is associated with an Exchange mailbox.
Retention Category – can be chosen based on the requirement.
Priority – can be chosen based on the requirement.
Language – can be chosen based on the requirement.
Directory Server – the corresponding directory server.
Site Name – the corresponding site name.
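For illustration only, such a CSV could look like the sketch below; every value is a placeholder, and the exact column headers must match what the Personal Store Management import in your environment expects:

UNCPath,Mailbox,Archive,Archive Type,Retention Category,Priority,Language,Directory Server,Site Name
\\EVSERVER01\PSTImport\user1.pst,John Doe,John Doe,Exchange Mailbox,Default Retention,Medium,English,EVDIR01.contoso.com,Site1
\\EVSERVER01\PSTImport\user2.pst,Jane Smith,Jane Smith,Exchange Mailbox,Default Retention,Medium,English,EVDIR01.contoso.com,Site1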

Once the CSV file is ready, we need to import the data via Personal Store Management by choosing the multiple-import option and feeding in the CSV file.

Untitled

Untitled

Once imported, we can see a summary of the successfully imported CSV files.

If it is unable to find the associated archive for any entries in the CSV file, it gives an error message only for those entries, and we have an option to export them as CSV files.

Untitled

After the import is successful, we can see the list of successfully imported files with the information below.

Untitled

Now we have provided EV with the data required to migrate into the associated archives. Next we need to create the PST collector task, the PST migrator task, and the PST migration policy.

After this, we need to create the PST holding folder by right-clicking the EV site properties and specifying a location. This PST holding folder is a temporary location that EV uses to copy the actual PST files from the UNC path and perform the import.

This is done because if EV tries to import a PST and the import fails, that PST can no longer be used. After the migration is complete, EV automatically deletes these files based on the PST migration policy that we have configured.

Untitled

After this, configure the PST migration policy –

We can ignore the client-driven settings here, because we are performing a server-driven migration by providing the PST files via the CSV file.

Untitled

There is an option to set the post-migration handling of the PST files. It is better not to use this option until the complete migration task is over and we get confirmation from the end users.

Untitled

There is also a very good option to send an email notification after the migration.

Untitled

After this we need to create the PST collector task.

untitled13

This setting is very important: it specifies the maximum number of PST files to be collected in the holding area. We can set this value based on our requirement.

Untitled

We should schedule the collector task to run outside office hours.

Untitled

Configure the Migrator Task

Once this is done, we need to configure the PST migrator task.

untitled16

We need to configure the temporary file location for the PST files to start the migration.

Untitled

We also have an option for the number of PSTs to migrate concurrently, which we can increase based on our requirement. After the CSV is imported, we can run the PST collector and migrator tasks, which will start importing the PSTs into the associated EV archives.

There is also a file dashboard that helps us check the current migration status at any time.

Untitled

 

Very important – select the option to override the password for password-protected PST files in Personal Store Management. With this, even password-protected PST files are migrated, which is a great option.

untitled17

Tips :

  1. Make sure the EV service account is used to run the collector and migrator tasks.
  2. Make sure the EV service account has full access to the PST holding, collecting, and migrating shared drives. Without this, the import, collection, and migration will fail.
  3. It is better not to perform any failovers of the node while a large import operation is in progress.
  4. PST collector and PST migrator logs are generated whenever these tasks run and are located in the EV provisioning task location. They give more information when there are issues or roadblocks in the migration.
  5. If any of the provided PST files are password protected, they will not be migrated unless we select the override option for password-protected files in Personal Store Management.
  6. Make sure there is sufficient free disk space in the PST collector and PST migrator locations.

Thanks & Regards
Sathish Veerapandian

Steps to renew the SSL Service Communication certificate in ADFS server

This article explains the types of certificates present on an ADFS server and the steps to renew the SSL service communication certificate.

Basically, there are 3 types of certificates required for ADFS:

  1. Service communication certificate – This certificate is used for secure communication with clients (web clients, federation servers, web application proxies, and federation server proxies). It is presented to end users when an application redirects them to the ADFS page. It is always recommended to use a publicly trusted SSL certificate for the service communication certificate, because it has to be presented to end users when they are redirected to the ADFS page.
  2. Signing certificate – This certificate is used to sign the SAML token. When a token is signed, all the data within it is still readable in clear text, but the consumer that receives the token knows it has not been tampered with since it left the source; if tampering is detected, the token is rejected. Token signing can be done only with the private portion of the certificate, which only the ADFS server holds.
    Token validation is done with the public portion of this certificate, which is available in the ADFS metadata. ADFS comes with a default self-signed signing certificate that is valid for 1 year, and this validity can be extended; alternatively, we can generate one from an internal CA and assign it.
  3. Token decryption certificate – This certificate is used when an application sends encrypted tokens to the ADFS server. It does not sign the token; it only encrypts it. The application encrypts the token using the public part of the token decryption certificate, and only the ADFS server holds the private part of the key, which it uses to decrypt the token. ADFS comes with a default self-signed token decryption certificate that is valid for 1 year, and this validity can be extended; alternatively, we can generate one from an internal CA and assign it.

We can see the public certificate from the published ADFS  metadata.

Access the metadata URL in a browser and look for the X509Certificate elements.

Testtr

We can see multiple X509 values. The public certificate is base64 encoded, so it normally ends with an "=" sign, as in the example below.

Testtr1

 

Once we save this value in .crt format, we can open the public certificate that is published in the ADFS metadata URL. The application uses it to encrypt the token and send it to the ADFS server, and the ADFS server in turn decrypts it using the certificate's private key. This private key is present only on the ADFS server; if it is ever compromised, anybody could impersonate your ADFS server.

Testtr1

We can verify the encryption on our own to get a better understanding of how it works.
When we run a SAML trace in Firefox Developer Edition against a relying party configured in ADFS and inspect the SAML token, we can see that the SAML response sent to the integrated service provider is encrypted.

The below steps can be followed to renew the service communication certificate:

  1. Generate CSR from ADFS server. This can be done via IIS.
  2. Get the certificate issued from the public CA Portal.
  3. Once certificate is issued, add new certificate in Certificate store.
  4. Verify Private Key on the certificate. Make sure new certificate has the private key.
  5. Assign Permissions to the Private Key for ADFS service account. Right click on the certificate, click manage private keys, add ADFS service account and assign permissions as shown in below screenshot.

Untitled

6. From the ADFS console, select "Set Service Communication Certificate".

7. Select the new certificate from the prompted list of certificates.
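Steps 6 and 7 can also be done from PowerShell on the primary ADFS server. Below is a minimal sketch; the thumbprint is an illustrative placeholder:

# Find the thumbprint of the newly imported certificate
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, Thumbprint, NotAfter

# Set it as the service communications certificate
Set-AdfsCertificate -CertificateType Service-Communications -Thumbprint "1A2B3C4D5E6F7890ABCDEF1234567890ABCDEF12"

# On Windows Server 2012 R2 and later, the SSL binding is updated separately
Set-AdfsSslCertificate -Thumbprint "1A2B3C4D5E6F7890ABCDEF1234567890ABCDEF12"

# Restart the ADFS service for the change to take effect
Restart-Service adfssrv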

To renew the SSL certificate for the ADFS claims provider's federation metadata URL, you can follow the previous article – https://exchangequery.com/2018/01/25/renew-ssl-certificate-for-adfs-url/

 

Create Cosmos DB, failover options, and data replication options from the Azure subscription

This article outlines the steps to create a Cosmos DB account from the Azure subscription.

  1. Log in to the Azure portal – click on Azure Cosmos DB – Create Cosmos DB.
  2. Type the document ID – keep in mind that this ID forms the URL we will be using in the connection string in the application.
  3. Select the preferred API according to your requirement.
  4. Choose the Azure subscription and select the resource group.
  5. Choose the primary location where the data needs to be replicated. There is an option to enable geo-redundancy, which can also be done later.

Picture9
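The same account can also be created from PowerShell instead of the portal. Below is a minimal sketch assuming the Az.CosmosDB module; the names and regions are illustrative, and the first location listed becomes the write region:

New-AzCosmosDBAccount -ResourceGroupName "my-rg" `
    -Name "mycosmosaccount" `
    -ApiKind "Sql" `
    -Location @("West Europe", "North Europe") `
    -DefaultConsistencyLevel "Session" `
    -EnableAutomaticFailover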

To Enable Geo-Redundancy-

Click on – Enable Geo Redundancy – and choose the  preferred region.

Picture10

Replicate data globally in a few clicks –

Picture11

Failover options –

There are 2 failover options: manual and automatic.

Picture12
A manual failover can be triggered at any time – we just need to accept the disclaimer and initiate the failover.

Picture13

Add new regions at any time and replicate your data in a few minutes –

Picture14

Failover options – Automatic

We need to enable automatic failover as shown below.

Picture15

There is also an option to change the failover priorities in a few clicks. The good part is that this can be done at any time, and we do not need to change anything in the code.

Picture16

Consistency levels:

These can be modified at any time. The default consistency level is Session, as shown below.

Picture17

Network Security for the Database:

We have an option to restrict access to the database to a few subnets only. This provides strong security for the documents. A VAPT (vulnerability assessment and penetration test) can be initiated after setting up this restriction, which eases the database admin's job with respect to data security considerations.

Picture18

Endpoint and Keys to integrate with your code:

We need to use the URI and the primary key to integrate with the code. These can be seen by clicking the Keys section on the left side.

Picture19.png

Summary:

Now that the Cosmos database is created, we create a new collection and then create documents, which are stored as JSON. Try to keep most of the documents under one collection, because the pricing model is per collection.

Create collection:

Click on Add collection

Picture21.png

Create a new database ID, then a collection ID.

Remember that collections in Cosmos DB are organized in the order below.

Picture23

Now we need to choose the throughput and storage capacity; charges are based on this selection. There is also an option to define unique keys, which adds more data integrity.

Picture22

Example of a new document

Picture25

It is better to define a document ID and collection ID.

Picture24

Once the above is done, we can connect to the documents via the preferred API, and the developer does not need to worry about data schema, indexing, or security.

More sample code is available on GitHub:

https://github.com/Azure-Samples/azure-cosmos-db-documentdb-nodejs-getting-started

Example below:

Before you can run this sample, you must have the following prerequisites:

  • An active Azure Cosmos DB account.
  • Node.js version v0.10.29 or higher.
  • Git.

1) Clone the repository.

Picture26
2) Change directories into the cloned repository.
3) Substitute the endpoint and primary key with your own values.

Picture27
4) Run npm install in a terminal to install the required npm modules.
5) Run node app.js in a terminal to start your Node application.
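For reference, the steps above map to the following commands (a minimal sketch, assuming git and Node.js are already installed):

# 1) Clone the repository and 2) change directories
git clone https://github.com/Azure-Samples/azure-cosmos-db-documentdb-nodejs-getting-started.git
cd azure-cosmos-db-documentdb-nodejs-getting-started

# 3) Edit the sample's configuration with your own endpoint and primary key before running

# 4) Install the required npm modules and 5) start the Node application
npm install
node app.js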

Thanks & Regards
Sathish Veerapandian
MVP – Office servers & Services.

Microsoft Cosmos DB features, options, and summary

This article gives an introduction to Microsoft Cosmos DB, the features available in it, and the options to integrate it with an application.

Introduction:

Cosmos DB is the next generation of Azure DB; it is an enhanced version of DocumentDB.
DocumentDB customers, with their data, automatically became Azure Cosmos DB customers.
The transition is seamless, and they now have access to all capabilities offered by Azure Cosmos DB.

Cosmos DB is a planet-scale database. It is a good choice for any serverless application that needs response times on the order of milliseconds and needs to scale rapidly and globally. Scaling is transparent to the application, and the configuration does not need to change.

How It was derived:

Microsoft Cosmos DB isn't entirely new: it grew out of a Microsoft development initiative called Project Florence that began in 2010. Project Florence was a speculative glimpse into a future where our natural and digital worlds could co-exist in harmony through enhanced communication.

Picture1

  • It was first commercialized in 2015 with the release of a NoSQL database called Azure DocumentDB
  • Cosmos DB was introduced in 2017.
  • Cosmos DB expands on it by adding multi-model support, global distribution capabilities and relational-like guarantees for latency, throughput, consistency and availability.

Why Cosmos DB?

  • It is schema-free: it indexes all the data without requiring you to deal with schema and index management.
  • It's also multi-model, natively supporting document, key-value, graph, and column-family data models.
  • It is an industry-first globally distributed, horizontally scalable, multi-model database service. Azure Cosmos DB guarantees single-digit-millisecond latencies at the 99th percentile anywhere in the world, offers multiple well-defined consistency models to fine-tune performance, and guarantees high availability.
  • No need to worry about instances, servers, CPU, or memory. Just select the throughput and required storage and create collections; Cosmos DB is provisioned purely in terms of throughput. It integrates with Azure Functions for serverless, event-driven solutions.
  • APIs and access methods – DocumentDB API, Graph API (Gremlin), MongoDB API, RESTful HTTP API, and Table API. This gives more flexibility to the developer.
  • It is elastic, globally scalable, and highly available, and it automatically indexes all your data.
  • 5 consistency levels – bounded staleness, consistent prefix, session, eventual, and strong (immediate) consistency. The application owner now has more options to trade off between consistency and performance.

Summary on Cosmos DB:

Picture2

Example without Cosmos DB:

  • Data Geo replication might be a challenge for the developer.
  • Users from remote locations might experience latency and inconsistency in their data.
  • Providing an automatic failover is a real challenge.

Picture3.png

Example with Cosmos DB:

  • Data can be geo-distributed in a few clicks.
  • The developer does not need to worry about data replication.
  • Strong consistency can be given to end users across geo-distributed locations.
  • The web-tier application can be switched between primary and secondary regions at any time in a few clicks.
  • Failover can be initiated manually at any time, and automatic failover is available.

Picture5

Data Replication Methods:

  • Replicate data with a single click – regions can be added or removed with a single click.
  • Failover can be customized at any time in a few clicks (automatic or manual).
  • The application does not need to change.
  • The web tier can be moved easily, and it will automatically find the nearest database.
  • Write/read regions can be modified at any time.
  • New regions can be added or removed at any time.
  • The data can be accessed with different APIs.

Existing data can be migrated:

  • For example, if we already have a MongoDB app, we can simply import the data and move it over.
  • Just copy the MongoDB data into Cosmos DB and replace the URL in the code.
  • We can use the Data Migration Tool for the migration.

5 Consistency Types:

There are 5 consistency levels, and the developer can choose among them according to the requirement.

  • Eventual – end users get the best performance, but the data is not guaranteed to be consistent across regions.
  • Strong – a write is committed only after it has been copied to the read regions (consistent data across all regions).
  • Bounded staleness – there is an option to set the staleness window, for example to 2 hours; if it is set to 0 it becomes strong consistency. (We can select an interval within which reads may lag behind writes while replication to the read regions completes.)
  • Session – consistency is guaranteed within a client session but not for all users; the client that commits the data always sees its own fresh data.
  • Consistent prefix – the order of writes is preserved in the replicas, so readers see the data in a uniform order and never out of sequence.

Based on these 5 consistency levels, the application developer can decide whether to favor the best performance or more consistent data for the end users.

Example of Eventual Replication:

With eventual consistency, the data in the read regions may lag, and only users in the write region are guaranteed to see the fresh data.

Picture6

Replicate Data with a single click:

Cosmos DB offers replication to more regions, in just a few clicks, than Amazon and Google combined.

Picture7.png

Available API Methods:

Picture8

Recommendations from Microsoft:

  • According to Microsoft, Cosmos DB can be used for "any web, mobile, gaming and IoT applications that need to handle massive amounts of reads and writes on a global scale with low response times."
  • However, Cosmos DB's best use cases might be those that leverage event-driven Azure Functions, which enable application code to be executed in a serverless environment.
  • It is not a relational database and it is not SQL Server; it is not good at ad hoc joins. The shape of the data does not matter much as long as you do not do joins.
  • The minimum is 400 RU/s per collection, which is around 25 USD per month. Each collection is charged individually, even if it contains only a small amount of data, so it may be worth changing your code to put most documents into one collection.
  • It is a "NoSQL" platform with a SQL layer on top for query operations, so it is better not to do multiple joins.

Thanks & Regards
Sathish Veerapandian
MVP – Office servers & services
