Active Directory deployment using CloudFormation in AWS

Paste the following code into Notepad and save the file with a .yml extension (e.g., ActiveDirectory.yml).

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ADDomainName:
    Description: "Name of the AD domain, e.g. Mydomain.LOCAL"
    Type: String
  AdminPassword:
    NoEcho: true
    Description: "Password for the default 'Admin' account (e.g. Pass@me123)"
    Type: String
  MyVPC:
    Description: VPC to operate in
    Type: AWS::EC2::VPC::Id
  EditionType:
    Description: "Edition of AWS Managed Microsoft AD"
    Type: String
    Default: Enterprise
    AllowedValues:
      - Standard
      - Enterprise
  PrivateSubnet1ID:
    Description: "ID of the private subnet 1 in Availability Zone 1 (e.g., subnet-a0246dcd)"
    Type: AWS::EC2::Subnet::Id
  PrivateSubnet2ID:
    Description: "ID of the private subnet 2 in Availability Zone 2 (e.g., subnet-a0246dcd)"
    Type: AWS::EC2::Subnet::Id

Resources:
  MYDIR:
    Type: AWS::DirectoryService::MicrosoftAD
    Properties:
      Name: !Ref ADDomainName
      Password: !Ref AdminPassword
      Edition: !Ref EditionType
      VpcSettings:
        SubnetIds:
          - !Ref PrivateSubnet1ID
          - !Ref PrivateSubnet2ID
        VpcId: !Ref MyVPC

Outputs:
  DomainName:
    Description: Name of the newly created domain
    Value: !Ref ADDomainName
    Export:
      Name: DomainName
  DirectoryID:
    Description: Directory ID, used when joining EC2 and SQL Server instances
    Value: !Ref MYDIR
    Export:
      Name: Directory-ID
  DNS:
    Description: IP addresses of the DNS servers
    Value: !Join
      - ','
      - !GetAtt MYDIR.DnsIpAddresses
    Export:
      Name: DnsIpAddresses

 

Open the AWS console, go to the CloudFormation service, and create a new stack; browse to and select the YAML file created in the step above.

SelectFile.jpg

 

Specify the stack name and the parameters, such as the AD domain name, Admin password, edition, VPC, and subnets.

Parameter.jpg
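If you prefer to script this step rather than click through the console, here is a minimal Python sketch using boto3; the stack name and all parameter values below are placeholders for this example.

import boto3

# Create the stack from the template saved earlier. All parameter
# values below are placeholders; EditionType falls back to its default.
cf = boto3.client('cloudformation')
with open('ActiveDirectory.yml') as f:
    template_body = f.read()

cf.create_stack(
    StackName='managed-ad-demo',
    TemplateBody=template_body,
    Parameters=[
        {'ParameterKey': 'ADDomainName', 'ParameterValue': 'Mydomain.LOCAL'},
        {'ParameterKey': 'AdminPassword', 'ParameterValue': 'Pass@me123'},
        {'ParameterKey': 'MyVPC', 'ParameterValue': 'vpc-0123456789abcdef0'},
        {'ParameterKey': 'PrivateSubnet1ID', 'ParameterValue': 'subnet-aaaa1111'},
        {'ParameterKey': 'PrivateSubnet2ID', 'ParameterValue': 'subnet-bbbb2222'},
    ],
)
# Block until the stack reaches CREATE_COMPLETE.
cf.get_waiter('stack_create_complete').wait(StackName='managed-ad-demo')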

 

AWS will provision the resources in the background; the status will show CREATE_IN_PROGRESS.

Working.jpg

 

After completion, the status turns to CREATE_COMPLETE and the Outputs tab shows the returned values; the Export names can be referenced in any future CloudFormation deployment, such as Windows EC2 or AWS RDS.

Final.jpg
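To consume these outputs programmatically instead of reading them off the console, a small boto3 sketch like the following can help ('managed-ad-demo' is the placeholder stack name from the deployment sketch above).

import boto3

# Read the stack outputs once creation has completed.
cf = boto3.client('cloudformation')
stack = cf.describe_stacks(StackName='managed-ad-demo')['Stacks'][0]
for output in stack.get('Outputs', []):
    print(output['OutputKey'], '=', output['OutputValue'])

# The exports (DomainName, Directory-ID, DnsIpAddresses) are also
# visible account-wide for cross-stack references:
for export in cf.list_exports()['Exports']:
    print(export['Name'], '=', export['Value'])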

Here are more details on Managed AD in AWS:
AWS Managed Microsoft AD
AD DS on AWS

Since this is my blog on AWS CloudFormation, I will keep improving the code above and add a few more use cases, such as accessing the managed AD, creating AWS RDS instances, and joining EC2 instances to the domain in AWS.

IBM Cloud Object Storage (COS) configuration for API access

The following steps can be useful if you have an on-premises application (service) that needs to access (download/upload) files in IBM Cloud Object Storage (COS).

Open the IBM COS URL https://console.bluemix.net/catalog/
'Sign up' for free access, then 'Log in'. Select Object Storage from the Storage category.

SIGNIN.jpg

 

Give the service an appropriate name, then click Create.

ServiceName.jpg

 

Once the service is ready, either click Create bucket or create your first bucket as highlighted below.

CreateBucket.jpg

 

 

Give the bucket an appropriate name; select the appropriate options for resiliency, location, and storage class (Standard, Vault, Cold Vault, Flex); then click Create.

bucketproperty.jpg

 

Select Endpoints and copy the endpoint name; it will be used by the API for access.

Endpoint.jpg

 

From the Service Credentials page, click 'New credential' and give the credential an appropriate name. If you have already created a service ID, click 'Select Service ID'; if not, choose 'Create New Service ID'. If this is your first time, I would create a new service ID (don't click Add just yet).

ServiceCredential.jpg

 

Give the new service ID an appropriate name. In the 'Add inline…' option, paste {"HMAC": true}; this generates an access key ID and secret access key for API access. Now click Add.

NewCredential.jpg

 

Now click View Credential and note down the access and secret keys.

viewCredential.jpg

 

Use the endpoint, access key, and secret key to connect to the bucket with Cyberduck, CloudBerry, S3 Browser, or any other equivalent tool.

CloudDuck.jpg

 

You can create buckets and folders and upload and download objects with this tool to confirm the storage configuration.

Upload.jpg
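If you would rather confirm access from code, here is a minimal Python sketch using boto3 against the COS S3-compatible API. The endpoint shown is just an example of a public COS endpoint; use the one you copied from the Endpoints page, and substitute your own HMAC keys and bucket name.

import boto3

# Connect to IBM COS over its S3-compatible API using the HMAC keys
# generated earlier. Endpoint, keys, and bucket are placeholders.
cos = boto3.client(
    's3',
    endpoint_url='https://s3-api.us-geo.objectstorage.softlayer.net',
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
)

cos.upload_file('report.pdf', 'my-cos-bucket', 'report.pdf')    # upload
cos.download_file('my-cos-bucket', 'report.pdf', 'copy.pdf')    # download
for obj in cos.list_objects_v2(Bucket='my-cos-bucket').get('Contents', []):
    print(obj['Key'], obj['Size'])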

Configuring Dell EMC Elastic Cloud Storage (ECS) for API access

This document is designed for test-environment use only; actual steps for production use might differ. Register for an ECS Test Drive account and complete the registration process. Once you have registered for your ECS Test Drive account, log in and click the credentials link at the top left of the page.

Register

 

Once you receive the credentials, try one of the following tools to create an S3 bucket.

Method 1: Using Cyberduck
Download and install Cyberduck.
Click Open Connection; in the dialog window, choose 'S3 (Amazon Simple Storage Service)' from the first drop-down list.

connect.jpg

 

From the ECS credentials page, set:

Server name = object.ecstestdrive.com
Access Key ID = 131693042649205091@ecstestdrive.emc.com
Password = fbHIum2QY3A5xSr7Vlx63S+USGw3O1ULsHS9jmom

Then click Connect.

openconnection.jpg

 

Click in the blank area, click 'New Folder', and give the bucket a name (e.g., 'storage-ecs-part1'). The name must be lowercase and unique.

bucket create.jpg

 

Method 2: Using CloudBerry
Install CloudBerry (freeware) on your storage or test server.
Select File > New S3 Compatible Account > S3 Compatible.

cloudberry-connect.jpg

 

The display name can be anything you wish; supply the service point, access key, and secret key used in Method 1. The test connection should succeed.

CreateBucket_cloudberry.jpg

 

Create a new bucket.

cloudberry-createbucket.jpg

The following two methods can also be used to test bucket access (a programmatic check using Python is sketched after this list):
Another GUI tool, S3 Browser
The EMC ECS CLI (it requires registration with EMC)
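For the programmatic check mentioned above, here is a minimal Python sketch using boto3 against the ECS Test Drive S3-compatible endpoint, reusing the demo credentials shown on the ECS credentials page earlier.

import boto3

# ECS Test Drive exposes an S3-compatible endpoint; plug in the
# credentials from your own ECS credentials page.
ecs = boto3.client(
    's3',
    endpoint_url='https://object.ecstestdrive.com',
    aws_access_key_id='131693042649205091@ecstestdrive.emc.com',
    aws_secret_access_key='fbHIum2QY3A5xSr7Vlx63S+USGw3O1ULsHS9jmom',
)

# Same naming rules as the GUI tools: lowercase and unique.
ecs.create_bucket(Bucket='storage-ecs-part1')
print([b['Name'] for b in ecs.list_buckets()['Buckets']])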

 

 

Configuration of Google Cloud Storage for application access

The following steps can be useful if you have an on-premises application (service) that needs to access (download/upload) files in Google Cloud Storage.

Open https://cloud.google.com/ and configure your free account if you have not done so already.
Open the console and create a new project.

CreateProject.jpg

 

Select the correct project if you have access to multiple projects.

Select Project.jpg

Select Storage from the 'Products & services' menu.

Storage

 

Click 'Create Bucket' and give the bucket a name, storage class, and location.

CreateBucketOption.jpg

CreateBucketOption2.jpg

 

Upload some files/folders manually.

UploadFiles

 

Click Settings, then Interoperability, and note down the cloud storage URL, access key, and secret key. If no secret key is present, click 'Create a new key'.

AccessKey.jpg

 

Testing bucket access:

Install CloudBerry (freeware) for Google Cloud on your test server.
Connect to Google Cloud Storage using the access key and secret key; make sure the 'Access & secret key' authentication option is selected.

CloudBerrySetup.jpg

 

Test copying (or cutting) and pasting files using the CloudBerry console.

UploadFiles.jpg
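Access can also be verified from code: the interoperability (HMAC) keys let any S3-style client talk to Google Cloud Storage's XML API. Here is a minimal boto3 sketch; the keys and bucket name are placeholders.

import boto3

# The interoperability (HMAC) keys from the Settings page allow
# S3-style clients to talk to the GCS XML API endpoint.
gcs = boto3.client(
    's3',
    endpoint_url='https://storage.googleapis.com',
    aws_access_key_id='YOUR_GOOG_ACCESS_KEY',
    aws_secret_access_key='YOUR_GOOG_SECRET_KEY',
)

gcs.upload_file('photo.jpg', 'my-gcs-bucket', 'photo.jpg')      # upload
for obj in gcs.list_objects_v2(Bucket='my-gcs-bucket').get('Contents', []):
    print(obj['Key'])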

 

 

Configuring Rackspace Cloud Files Storage for API access

The following steps can be useful if you have an application (service/API) in your environment that needs to access (download/upload) files in Rackspace Cloud Files storage.

Sign up for the Rackspace cloud.
Go to the Rackspace control panel and log in with the root user account you configured during the signup process.

loginconsole.jpg

 

Create a new user account for API access. Go to User Management from the Account tab.

UserManagement.jpg

 

Click Create User. The console lists all users created so far; one default (root) user is available by default.

CreateUser.jpg

Provide user details such as first name, last name, email, phone, etc. The contact type must be Technical; then select the appropriate permissions on the Rackspace cloud.

UserPermission.jpg

 

 

Once the user is created successfully, go to the user account's properties and copy the Rackspace API key.

UserAPIKey.jpg

 

 

From the control panel, select Rackspace Cloud from the Product list, then select Files from the Storage list.

storage-files.jpg

 

Create a new container and select the appropriate region. Keep the type as Private.

ContainerCreate.jpg

 

Manually upload some files using the console itself.

UploadFiles.jpg

 

Testing bucket (container) access

Download the Rackspace Command Line Interface.
Go to the directory where you downloaded the rack binary and run the following command to connect to the container.

rack.exe configure

cmd1.jpg

Retrieve a list of containers:

rack files container list

cmd2.jpg

List all of the objects in a specified container:

rack files object list --container StorageAccess

cmd3.jpg

Upload a directory of objects into a specified container:

rack files object upload-dir --container StorageAccess --dir \temp\pictures

cmd4.jpg

You can also try checking cloud access using the CloudBerry Backup tool.
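If you would rather test from code, here is a rough Python sketch using the requests library. It relies on Rackspace's documented RAX-KSKEY:apiKeyCredentials authentication extension; the username, API key, and container name ('StorageAccess') are placeholders, and the Cloud Files endpoint is taken from the service catalogue returned at login.

import requests

# Authenticate against the Rackspace identity API using the documented
# RAX-KSKEY:apiKeyCredentials extension. Username and API key are
# placeholders; the API key comes from the user's properties page.
resp = requests.post(
    'https://identity.api.rackspacecloud.com/v2.0/tokens',
    json={'auth': {'RAX-KSKEY:apiKeyCredentials': {
        'username': 'your-api-user',
        'apiKey': 'your-rackspace-api-key',
    }}},
)
resp.raise_for_status()
access = resp.json()['access']
token = access['token']['id']

# Pick the Cloud Files endpoint out of the service catalogue.
cloud_files = next(s for s in access['serviceCatalog'] if s['name'] == 'cloudFiles')
endpoint = cloud_files['endpoints'][0]['publicURL']

# Cloud Files speaks the OpenStack Swift API: GET on the account URL
# lists containers, GET on a container lists its objects.
headers = {'X-Auth-Token': token}
print(requests.get(endpoint, headers=headers).text)
print(requests.get(endpoint + '/StorageAccess', headers=headers).text)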

 

Configuration of Azure Blob storage for application access

The following steps can be useful if you have an on-premises application (service) that needs to access (download/upload) files in Microsoft Azure Blob storage.

Open the Azure web portal and configure a free tier account if you have not done so already.

Create a blob storage resource.

CreateResource.jpg

 

Give the storage account an appropriate name (remember, this must be globally unique across Azure), select 'Blob storage' as the account kind, and select 'New' in the 'Resource group' section. If you have already created a resource group, you can select an existing one. Select 'Pin to dashboard' so you can access the account directly from the dashboard.

StorageAccount.jpg

 

From the dashboard, click the storage account you just created ('appazurestorage1').

DashBoard.jpg

 

Create a new 'Container'. It is a kind of folder; in AWS and Google Cloud the equivalent is called a bucket. Keep the access level as Private.

Container.jpg

 

Click the 'Access keys' tab and note down the storage account name and key.

AccessKey.jpg

 

The basic configuration of Blob storage is ready. You can now upload a few files manually through the Azure portal by clicking the container you just created.

Upload.jpg

 

Testing bucket access:

Method 1: Using CloudBerry
a. Download and install CloudBerry (freeware) for Azure on your on-premises server.
b. Connect to Azure Blob storage using the access key and account name.

CloudBerry.jpg

 

c. Test copying (or cutting) and pasting files using the CloudBerry console.

CloudBerryCopypaste.jpg

 

Method 2: Using the MS AzCopy command-line tool.
Download and install AzCopy on the server. Open a command prompt and switch to the AzCopy directory, which is most likely "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy".

Download all blobs (files) of a container to local disk.
Sample syntax:

AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer /Dest:C:\myfolder /SourceKey:key /S

Example.

AzCopy /Source:https://appazurestorage1.blob.core.windows.net/myappbucket /Dest:C:\temp\Azure /SourceKey:pQrvBr+rwoI9psWKx73SKcrE8M0JW+ZUQeIY05CJ+PJMGSFMpXV+U9Maygbtiwtc69+aPkabmZna6hxfhuw2NA== /S /sourceType:blob

 

c1.jpg

 

Upload all blobs (files) to a container (bucket).
Syntax:

AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /S

Example

AzCopy /Source:C:\temp\Azure\upload /Dest:https://appazurestorage1.blob.core.windows.net/myappbucket /DestKey:pQrvBr+rwoI9psWKx73SKcrE8M0JW+ZUQeIY05CJ+PJMGSFMpXV+U9Maygbtiwtc69+aPkabmZna6hxfhuw2NA== /S

 

c2.jpg

More details about the AzCopy tool.
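As an alternative to AzCopy, access can also be scripted. Below is a minimal sketch using the newer azure-storage-blob Python SDK (v12); the account name matches the 'appazurestorage1' example above, while the key, container name, and file path are placeholders.

from azure.storage.blob import BlobServiceClient

# Account name and key come from the 'Access keys' tab; the container
# name matches the one created earlier in this walkthrough.
service = BlobServiceClient(
    account_url='https://appazurestorage1.blob.core.windows.net',
    credential='YOUR_STORAGE_ACCOUNT_KEY',
)
container = service.get_container_client('myappbucket')

# Upload one file, then list everything in the container.
with open(r'C:\temp\Azure\upload\file.txt', 'rb') as data:
    container.upload_blob('file.txt', data, overwrite=True)
for blob in container.list_blobs():
    print(blob.name)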

Configuration of AWS S3 (Simple Storage Service) for application access.

The following steps can be useful if you have an on-premises application (service) that needs to access (download/upload) files in AWS S3 storage.

Sign in to the AWS Console with your root account credentials.

loginroot.jpg

Select IAM under the 'Security, Identity & Compliance' category.

IAM

Add a new user for API or Console access.

CreateUser.jpg

Give an appropriate 'User name' and select the access type.

Please note: selecting both access types isn't recommended for production use. This demo requires only 'Programmatic access'. You can use the same user account to delegate AWS storage-related tasks managed via the AWS console.

UserProperties.jpg

Select 'Attach existing policies directly', then search for S3 and attach 'AmazonS3FullAccess'.

Permission.jpg

Review the settings and then click 'Create user'.

Note down the user name, access key ID, secret access key, and sign-in URL. You can also download a CSV file containing all of this information.

UserDetails.jpg

Select 'S3' from the 'Storage' section.

S3Page.jpg

 

Click 'Create Bucket', give an appropriate name, and select the 'Region'; the bucket name must be unique across the AWS infrastructure. Then click 'Create'. I have skipped the remaining criteria such as versioning, permissions, and website-related settings for this test. However, if you need specific settings, please refer to the S3 documentation.

CreateBucket.jpg

You can upload files manually using the AWS Console.

Upload.jpg

 

Testing bucket access from an on-premises application.

Method 1: Using CloudBerry

Install CloudBerry (freeware) and connect to the AWS S3 bucket.

CloudBerryTest.jpg

You can copy (or cut) and paste files from the local machine to S3, or vice versa.

CloudBerry-copypaste.jpg

 

Method 2: Using PowerShell

Install the AWS Tools for PowerShell, open PowerShell, and use the following commands to test bucket access.

Set credentials:

Set-AWSCredentials -AccessKey AKIAI3ZDRI4HGSD4NOGQ -SecretKey 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

Get all buckets:

Get-S3Bucket

Ps1.jpg

Show all contents of a specified bucket:

Get-S3Object -BucketName appdatatest1 -MaxKey 100 | Format-Table

PS2.jpg

Refer to the AWS documentation for more details on the AWS PowerShell commands.
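For completeness, the same checks can be done from Python with boto3; this sketch mirrors the PowerShell commands above. The keys are placeholders, and 'appdatatest1' is the demo bucket created earlier.

import boto3

# Mirrors the PowerShell commands above using the IAM user's keys.
s3 = boto3.client(
    's3',
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
)

print([b['Name'] for b in s3.list_buckets()['Buckets']])        # Get-S3Bucket
for obj in s3.list_objects_v2(Bucket='appdatatest1', MaxKeys=100).get('Contents', []):
    print(obj['Key'], obj['Size'])                              # Get-S3Object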