How to configure a SQL Server Failover Cluster Instance (FCI).

This blog is part of a SQL HA/DR solution series. In my previous blogs I covered how Log Shipping, Mirroring, and AlwaysOn Availability Groups can be configured; here you will get a step-by-step procedure for the SQL Failover Cluster Instance (FCI) high-availability solution. SQL FCI is sometimes also known as AlwaysOn FCI, and it is a bit different from an AlwaysOn AG (Availability Group). An AlwaysOn FCI needs shared storage that is accessible from all the participating nodes, and it provides instance-level high availability. If your primary (active) server goes down, the secondary (passive) node takes over all SQL operations.

The details about SQL FCI can be found here.


How to join VMware ESX/vCenter to an Active Directory domain and manage it using a domain account.

Most organizations use an AD infrastructure for authentication and for administering their resources. ESX servers and vSphere can also be joined to an AD domain and administered using domain accounts.

Here are the steps to do the same.

Confirm that the appropriate DNS address and domain name are configured for ESX. Log in to ESX using the vSphere client, select the Configuration tab, and select 'DNS and Routing'. If the correct DNS IP is not configured, click 'Properties' and save valid details.

Click on Authentication Services, select Properties to edit the current settings, and select 'Active Directory' as the directory service type. Type the domain name, then join the domain by providing domain admin credentials.

Once the ESX host has joined the domain, you should be able to see its computer account listed in ADUC (Active Directory Users & Computers).

Now create a group called 'VMware Admin' and a user 'VAdmin'; this user will be a member of the 'VMware Admin' group.

In your vSphere client, select the Permissions tab, right-click on empty space, and select 'Add Permission'.

Click 'Add' and select the domain from the drop-down menu; you should now be able to see AD objects. Select the 'VMware Admin' group.

Select the role you wish to map to the AD group 'VMware Admin'.

Once it is added successfully, you should be able to log in to ESX through the vSphere client using domain credentials. You can check the box 'Use Windows session credentials' if you want to log in using your current Windows login.

If you are managing ESX using VMware vCenter, open the vCenter page, go to the 'Administration' page, and select 'Configuration'. Under 'Identity Sources', select 'Active Directory (Integrated Windows Authentication)', type the appropriate domain name, and click OK.

Click on the 'Global Permissions' tab, click the '+' option, then add the AD group 'VMware Admin'.

Select the role you wish to map to the 'VMware Admin' AD group.

Now you should be able to log in to vCenter using Windows and AD authentication.
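If you manage several hosts, the same join and permission assignment can also be scripted with VMware PowerCLI. This is only a sketch: the host name esx01.lab.local, the account names, and the mapping of the 'VMware Admin' group to the built-in Admin role are assumptions from my lab, so adjust them for your environment.

```powershell
# Connect to the host (or vCenter) first -- names/passwords are lab placeholders
Connect-VIServer -Server esx01.lab.local -User root -Password 'P@ssw0rd'

# Join the ESX host to the AD domain (same as Authentication Services > Join Domain)
Get-VMHostAuthentication -VMHost esx01.lab.local |
    Set-VMHostAuthentication -JoinDomain -Domain lab.local `
        -Username 'LAB\Administrator' -Password 'DomainP@ssw0rd'

# Grant the 'VMware Admin' AD group the Administrator role at the top level
New-VIPermission -Entity (Get-Folder -NoRecursion) `
    -Principal 'LAB\VMware Admin' -Role Admin
```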

#active-directory, #vmware, #windows

How to create an iSCSI shared disk in Windows Server for an ESX vSphere environment.

This is my first blog for ESX/vSphere, where the requirement was to create an iSCSI datastore for HA & DRS (High Availability & Distributed Resource Scheduler). We created the storage from a Windows server instead of using a dedicated storage appliance/VM. The purpose of such a configuration is only for learning and testing HA/DRS activity.

This configuration was built on Windows 2012 R2 with ESX 6.0.

On the Windows server, install 'File and iSCSI Services' and make sure 'iSCSI Target Server' is selected.

Once the role is installed, select the 'File and Storage Services' feature and then 'iSCSI'.

Right-click on empty space and select 'New iSCSI Virtual Disk'. In my screenshot one volume is already created and I am creating a new one.

Now select the partition where you wish to place the Virtual Disk.

Give an appropriate name and description and confirm the path of the iSCSI target.

Give an appropriate size and select the type of disk. In my example I have selected 'Dynamically expanding' so I don't need to worry about space.

You need to create a target, which is the list of ESX servers that will participate in HA/DRS and access the VHD using the iSCSI protocol.

In my example, I have used the DNS names of the ESX servers as initiators, but they can also be identified by IQN, IP address, or MAC address.

In the following screenshot, two ESX servers are added, but you can have even more.

Select the type of authentication; I am leaving it blank to avoid confusion.

On the next page, you can confirm the settings you have selected, and the result of each section will be shown.
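All of the wizard steps above can also be done with the iSCSI Target cmdlets that ship with the role. A minimal sketch, assuming a disk path of E:\iSCSIVirtualDisks\Disk1.vhdx, a target name of ESX-Cluster, and my two lab ESX host names as initiators:

```powershell
# Create a dynamically expanding virtual disk to serve over iSCSI
New-IscsiVirtualDisk -Path 'E:\iSCSIVirtualDisks\Disk1.vhdx' -SizeBytes 40GB

# Create a target listing the ESX hosts (by DNS name) allowed to connect
New-IscsiServerTarget -TargetName 'ESX-Cluster' `
    -InitiatorIds @('DNSName:esx01.lab.local','DNSName:esx02.lab.local')

# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName 'ESX-Cluster' `
    -Path 'E:\iSCSIVirtualDisks\Disk1.vhdx'
```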

Now the shared iSCSI disk is ready; you can add it to the ESX server using the vSphere console. Select ESX server \ Manage \ Storage \ Storage Adapters \ Targets \ Add Target.
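If you prefer the ESXi command line to the vSphere console, the same target can be added with esxcli. A sketch, assuming vmhba33 is your software iSCSI adapter and 192.168.1.10 is the Windows iSCSI target server:

```shell
# Add the Windows iSCSI server as a dynamic discovery (send targets) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260

# Rescan so the new LUNs show up as storage devices
esxcli storage core adapter rescan --all
```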

Rescan storage so all newly attached disks are visible.

Now you should be able to see the paths of all available iSCSI shared disks.

The iSCSI disk will also be available under storage devices.

On your Windows server, you will notice the target status shows as 'Connected'.

#disaster-recovery, #esx, #scsi, #storage, #vmware

How to create a Windows Network Load Balancing (NLB) cluster for Microsoft Exchange.

What is NLB?

NLB is intended for applications with relatively small data sets that do not change frequently, such as Web, FTP, and VPN services, and it provides high availability and high scalability for client requests.

For Exchange services, load balancing helps distribute incoming client connections over a variety of endpoints to ensure that no single endpoint takes on a disproportionate share of the load. Load balancing can also provide failover redundancy in case one or more endpoints fail. By using load balancing with Exchange Server 2013, you ensure that your users continue to receive Exchange service in case of a computer failure. Load balancing also enables your deployment to handle more traffic than one server can process while offering a single host name for your clients.

Configuration Steps:

In my test environment, I have two CAS servers on Windows 2012 R2 in a virtualized environment.

EX2013-CAS2.LAB.LOCAL

EX2013-CAS3.LAB.LOCAL

Install the 'Network Load Balancing' feature from the 'Add Roles and Features' wizard.

NIC properties on the first CAS (EX2013-CAS2.LAB.LOCAL):

NIC properties on the second CAS (EX2013-CAS3.LAB.LOCAL):

Change the NIC binding order. The production NIC should be at the top on both servers.

Open the Network Load Balancing console and click 'New Cluster'.

Select the NLB network interface and click 'Next'.

By default, only one IP will be used. If you wish, you can add multiple IPs.

Now click 'Add' and give the IP address of the cluster, which will be used by client requests.

Give the full Internet name of the cluster (the CAS endpoint); in my example it will be 'mail.lab.local'. Select 'Unicast' as the cluster operation mode.

Leave the default options as they are and click 'Finish'.

You may see a spinning hourglass, and the configuration progress will be logged at the bottom of the page. In case of an error during configuration, double-click the line to get a detailed error message.

Once the wizard completes successfully, the server will be shown in green.

Now add the second CAS server to the NLB cluster; the steps will be mostly the same.

A successful NLB configuration will show both nodes (CAS servers) in green status.
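The same cluster can also be built from PowerShell using the NetworkLoadBalancingClusters module. A sketch under assumptions: the NLB interface is named 'NLB' on both servers, and 10.10.10.40 is the cluster IP (the address encoded in the unicast NLB MAC 02-BF-0A-0A-0A-28).

```powershell
# Run on EX2013-CAS2: create the cluster on the NLB interface
Import-Module NetworkLoadBalancingClusters
New-NlbCluster -InterfaceName 'NLB' -ClusterName 'mail.lab.local' `
    -ClusterPrimaryIP 10.10.10.40 -OperationMode Unicast

# Join the second CAS to the cluster
Get-NlbCluster | Add-NlbClusterNode -NewNodeName 'EX2013-CAS3' -NewNodeInterface 'NLB'
```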

Now create a host record on the DNS server pointing to the IP address of the NLB cluster.
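If the zone is hosted on a Windows DNS server, the record can also be created with the DnsServer module. A sketch, assuming the zone name lab.local and 10.10.10.40 as the cluster IP:

```powershell
# A record: mail.lab.local -> NLB cluster IP
Add-DnsServerResourceRecordA -ZoneName 'lab.local' -Name 'mail' -IPv4Address '10.10.10.40'
```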

You will now notice that the MAC address of the NLB NIC is the same on both nodes.

In the Exchange shell, run the following PowerShell to force clients to use 'mail.lab.local'.

Get-OutlookAnywhere | Set-OutlookAnywhere -InternalHostname mail.lab.local -InternalClientsRequireSsl $false

You can verify whether the cluster is reachable by pinging the IP address or hostname.

Another way to confirm is by opening the OWA page using the cluster name.

Additional considerations if you are using a virtualized environment.

Go to NLB Manager \ Cluster Properties \ Cluster Parameters tab and write down the network address of the NLB cluster.

Shut down the NLB cluster VMs one by one (make sure you don't shut down both CAS servers at the same time), then configure the network adapters in ESX/VMware/Hyper-V that you added to the VMs for the NLB cluster to use a static MAC address that matches the NLB network address: 02-BF-0A-0A-0A-28.
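On Hyper-V, for example, the static MAC can be set from PowerShell while the VM is shut down. A sketch; the VM name and adapter name are placeholders for your environment:

```powershell
# Set the NLB adapter of the (powered-off) VM to the cluster's unicast MAC
Set-VMNetworkAdapter -VMName 'EX2013-CAS2' -Name 'NLB' -StaticMacAddress '02BF0A0A0A28'
```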

 

Please note:
All content provided on this blog is for informational purposes only. The reason for this blog is to document the things that I have learned from reading blogs/books and from self-study about Windows Server. We do not take any responsibility for events that may happen because of using/applying the code/concepts mentioned in this blog.
If you like the above content then feel free to share it with your colleagues and friends. If you wish to improve the content then feel free to comment or contact me directly.

#cas, #exchange, #high-availalablity, #nlb

RAID concepts and configuration in simple words for Windows admins.

#raid, #sql, #storage, #windows