Windows EC2 deployment using CloudFormation

The following YAML template can be used to deploy a Windows EC2 instance using CloudFormation.

Parameters:
  EnvironmentType:
   Description: Environment Type
   Type: String
   AllowedValues: [development, production]
   ConstraintDescription: must be development or production

  KeyName:
   Description: Name of an existing EC2 KeyPair used to RDP to this Windows instance.
   Type: AWS::EC2::KeyPair::KeyName
   ConstraintDescription: must be the name of an existing EC2 KeyPair.

Mappings:
 EnvironmentToInstanceType:
  development:
   instanceType: t2.micro
  production:
   instanceType: t2.small

Resources:
 ServerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow RDP & HTTP access from all IP addresses
   SecurityGroupIngress:
    -   IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
    -   IpProtocol: tcp
         FromPort: 3389
         ToPort: 3389
        CidrIp: 0.0.0.0/0

 WindowsInstance:
  Type: AWS::EC2::Instance
  Properties:
   InstanceType: !FindInMap [EnvironmentToInstanceType, !Ref 'EnvironmentType', instanceType]
    #Choose the correct ImageId; ami-da003ebf belongs to the base Windows 2012 R2 image (region-specific).
   ImageId: ami-da003ebf
   KeyName: !Ref KeyName
   SecurityGroups:
    - !Ref ServerSecurityGroup
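
Optionally, an Outputs section can be appended to the template above so the stack surfaces the address to connect RDP to (a minimal sketch; PublicIp is a built-in attribute of AWS::EC2::Instance):

```yaml
Outputs:
 InstancePublicIp:
  Description: Public IP address to use for the RDP connection
  Value: !GetAtt WindowsInstance.PublicIp
```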
    

 

Here are the steps.

  1. Save the above code in a WinEC2.YML file.
  2. Open the AWS Management Console. In the CloudFormation section, select New Template, then Upload a template to Amazon S3. Select the WinEC2.YML file and follow the wizard with all default options. You will be prompted for the Environment Type (production/development) and Key Pair.

EC2.jpg

  3. Once the deployment completes successfully, you will see events like the screenshot below.

EC2_Success.jpg

If you wish to join the newly created Windows EC2 instance to Active Directory, use the following reference for the YAML code: https://aws.amazon.com/blogs/security/how-to-configure-your-ec2-instances-to-automatically-join-a-microsoft-active-directory-domain/
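
The approach in that AWS post attaches the AWS-JoinDirectoryServiceDomain SSM document to the instance. A minimal sketch of the relevant instance properties follows; the instance profile name and the imported export names are assumptions (the exports match the Active Directory template later in this post), so adjust them to your environment:

```yaml
    #Requires an instance profile whose role allows SSM access (name here is hypothetical)
    IamInstanceProfile: !Ref SSMInstanceProfile
    SsmAssociations:
     - DocumentName: AWS-JoinDirectoryServiceDomain
       AssociationParameters:
        - Key: directoryId
          Value: [!ImportValue Directory-ID]
        - Key: directoryName
          Value: [!ImportValue DomainName]
        - Key: dnsIpAddresses
          Value: !Split [',', !ImportValue DnsIpAddresses]
```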


MS SQL deployment using CloudFormation in AWS

Here is the code snippet for MS SQL (RDS) deployment using YAML in AWS. If you wish to make it AD integrated, review the details given in the commented sections.

AWSTemplateFormatVersion: '2010-09-09'
Description: Creates an empty SQL Server RDS database as an example for automated deployments.
Parameters:
  SqlServerInstanceName:
    NoEcho: 'false'
    Description: RDS SQL Server Instance Name
    Type: String
    Default: MyAppInstance
    MinLength: '1'
    MaxLength: '63'
    AllowedPattern: "[a-zA-Z][a-zA-Z0-9]*"
  DatabaseUsername:
    AllowedPattern: "[a-zA-Z0-9]+"
    ConstraintDescription: Must contain only alphanumeric characters.
    Description: Database Admin Account User Name
    MaxLength: '16'
    MinLength: '1'
    Type: String
    Default: 'DBAdmin'
  DatabasePassword:
    AllowedPattern: "^(?=.*[0-9])(?=.*[a-zA-Z])([a-zA-Z0-9]+)"
    ConstraintDescription: Must contain only alphanumeric characters, with at least one letter and one number.
    Description: The database admin account password.
    MaxLength: '41'
    MinLength: '6'
    NoEcho: 'true'
    Type: String
    Default: Admin123
  DBEngine:
    Description: Select Database Engine
    Type: String
    AllowedValues: [Express, Enterprise]
  #The following parameter can be added if SQL needs to be AD integrated.
  #DomainID:
  # Description: Enter the Domain ID
  # Type: String

Mappings:
 SQLTOEngineType:
  Express:
   Engine: sqlserver-ex
  Enterprise:
   Engine: sqlserver-ee

Resources:
  SQLDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      DBInstanceIdentifier:
        Ref: SqlServerInstanceName
      LicenseModel: license-included
      Engine: !FindInMap [SQLTOEngineType, !Ref 'DBEngine', Engine]
      EngineVersion: 13.00.4466.4.v1
      DBInstanceClass: db.t2.micro
      AllocatedStorage: '20'
      MasterUsername:
        Ref: DatabaseUsername
      MasterUserPassword:
        Ref: DatabasePassword
      PubliclyAccessible: 'true'
      BackupRetentionPeriod: '1'
      #If the SQL RDS instance needs to be Active Directory integrated, uncomment the following property.
      #Domain: !ImportValue Directory-ID
      #OR
      #Domain: !Ref DomainID
      #An IAM role is mandatory for AD integration.
      #DomainIAMRoleName: 'rds-directoryservice-access-role'
Outputs:
   SQLDatabaseEndpoint:
     Description: Database endpoint
     Value: !Sub "${SQLDatabase.Endpoint.Address}:${SQLDatabase.Endpoint.Port}"

Here are the steps.

  1. Save the above code in a SQLRDS.YML file.
  2. Open the AWS Management Console. In the CloudFormation section, select New Template, then Upload a template to Amazon S3. Select the SQLRDS.YML file and follow the wizard with all default options.
  3. Once the deployment completes successfully, you will see events like the screenshot below.

sqlrds.jpg
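
For reference, with AD integration enabled the commented properties above would be uncommented like this (a sketch, assuming the Directory-ID export and the IAM role already exist in your account):

```yaml
      Domain: !ImportValue Directory-ID
      DomainIAMRoleName: 'rds-directoryservice-access-role'
```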


Active Directory deployment using CloudFormation in AWS

Paste the following code into Notepad and save the file with a YML extension (e.g. ActiveDirectory.yml).

AWSTemplateFormatVersion: 2010-09-09
Parameters:
 ADDomainName:
  Description: "Name of the AD domain, e.g. Mydomain.LOCAL"
  Type: String
 AdminPassword:
  NoEcho: true
  Description: "Password for the default 'Admin' account (hint: Pass@me123)"
  Type: String
 MyVPC:
  Description: VPC to operate in
  Type: AWS::EC2::VPC::Id
 EditionType:
  Description: "Type of AD"
  Type: String
  Default: Enterprise
  AllowedValues:
    - Standard
    - Enterprise
 PrivateSubnet1ID:
   Description: 'ID of the private subnet 1 in Availability Zone 1 (e.g., subnet-a0246dcd)'
   Type: 'AWS::EC2::Subnet::Id'
 PrivateSubnet2ID:
   Description: 'ID of the private subnet 2 in Availability Zone 2 (e.g., subnet-a0246dcd)'
   Type: 'AWS::EC2::Subnet::Id'

Resources:
  MYDIR:
    Type: 'AWS::DirectoryService::MicrosoftAD'
    Properties:
        Name: !Ref ADDomainName
        Password: !Ref AdminPassword
        Edition: !Ref EditionType
        VpcSettings:
            SubnetIds:
                - !Ref PrivateSubnet1ID
                - !Ref PrivateSubnet2ID
            VpcId: !Ref MyVPC
Outputs:
  DomainName:
    Description: Newly created domain name
    Value: !Ref ADDomainName
    Export:
      Name: DomainName
  DirectoryID:
    Description: ID of AD that will be used in EC2 & SQL servers
    Value: !Ref MYDIR
    Export:
     Name: Directory-ID
  DNS:
    Description: IP address of DNS servers.
    Value: !Join
          - ','
          - !GetAtt MYDIR.DnsIpAddresses
    Export:
     Name: DnsIpAddresses

 

Open the AWS console. Go to the CloudFormation service, create a New Stack, then browse and select the YML file created in the step above.

SelectFile.jpg

 

Specify the Stack name and parameters such as the AD name, Admin password, Edition, VPC, and Subnets.

Parameter.jpg

 

AWS will provision the resources in the background; the status will remain CREATE_IN_PROGRESS.

Working.jpg

 

After completion, the status will change to CREATE_COMPLETE. The Outputs tab will show the returned results; the values under Export Name can be used in any future CloudFormation deployment, such as Windows EC2 or AWS RDS.

Final.jpg
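
For example, the exported names can be consumed from a later template with Fn::ImportValue (a sketch; the property names here are placeholders, while Directory-ID and DnsIpAddresses are the export names defined in the Outputs section above):

```yaml
#Import the directory ID exported by the AD stack
SomeProperty: !ImportValue Directory-ID
#The comma-joined DNS export can be split back into a list of IPs
SomeListProperty: !Split [',', !ImportValue DnsIpAddresses]
```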

Here are the details of Managed AD in AWS.
AWS Managed Microsoft AD
AD DS on AWS

Since this is my blog on AWS CloudFormation, I will keep improving the above code and include a few more use cases, such as accessing the managed AD, creating AWS RDS, and joining EC2 instances to the domain in AWS.

IBM Cloud Object Storage (COS) configuration for API access

The following steps can be useful if you have an application (service) on premises that needs to access (download/upload) files in IBM Cloud Object Storage (COS).

Open the IBM COS URL https://console.bluemix.net/catalog/
'Sign up' for free access, then 'Log in'. Select Object Storage from the Storage category.

SIGNIN.jpg

 

Give the service an appropriate name, then click Create.

ServiceName.jpg

 

Once the service is ready, either click Create bucket or create your first bucket as highlighted below.

CreateBucket.jpg

 

 

Give the bucket an appropriate name; select the appropriate options for Resiliency, Location, and Storage class (Standard, Vault, Cold Vault, Flex); then click Create.

bucketproperty.jpg

 

Select Endpoints and copy the endpoint name; it will be used for API access.

Endpoint.jpg

 

From the Service Credentials page, click 'New credential' and give the credential an appropriate name. If you have already created a service ID, click Select Service ID; if not, choose Create New Service ID. If this is your first time, I would create a new service ID (don't click Add yet).

ServiceCredential.jpg

 

Give the New Service ID an appropriate name, and in the Add Inline... option paste {"HMAC":true}; this will generate an Access Key ID and Secret Access Key for API access. Now click Add.

NewCredential.jpg

 

Now click View Credential and note down the access and secret keys.

viewCredential.jpg
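
The displayed credential includes an HMAC section; based on the IBM COS service credential JSON format, it looks similar to the following (values here are placeholders):

```json
{
  "cos_hmac_keys": {
    "access_key_id": "<access key id>",
    "secret_access_key": "<secret access key>"
  }
}
```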

 

Use the endpoint, access key, and secret key to connect to the bucket using CyberDuck, CloudBerry, S3Browser, or any other equivalent tool.

CloudDuck.jpg

 

You can create buckets and folders and upload and download objects using the tool to confirm the storage configuration.

Upload.jpg

Configuring Dell EMC Elastic Cloud Storage (ECS) for API access

This document is designed for test environment use only; actual steps for production use might differ. Register an ECS Test Drive account and complete the registration process. Once you have registered your ECS Test Drive account, log in and click the credentials link at the top left of the page.

Register

 

Once you receive the credentials, try one of the following tools to create an S3 bucket.

Method 1: Using CyberDuck
Download and install CyberDuck.
Click Open Connection; in the dialog window, choose S3 (Amazon Simple Storage Service) from the first drop-down list.

connect.jpg

 

As per the ECS credentials page:

Server name = object.ecstestdrive.com
Access Key ID = 131693042649205091@ecstestdrive.emc.com
Password = fbHIum2QY3A5xSr7Vlx63S+USGw3O1ULsHS9jmom

Then click Connect.

openconnection.jpg

 

Click on a blank area, click 'New Folder', and give the bucket a name (e.g. 'storage-ecs-part1'). The name must be lowercase and unique.

bucket create.jpg

 

Method 2: Using CloudBerry
Install CloudBerry (freeware) on your storage or test server.
Select File > New S3 compatible account > S3 compatible.

cloudberry-connect.jpg

 

The display name can be anything you wish; supply the service point, access key, and secret key, the same values as in Method 1. The test connection must be successful.

CreateBucket_cloudberry.jpg

 

Create a new bucket.

cloudberry-createbucket.jpg

The following two methods can also be used to test bucket access:
Another GUI tool called S3Browser
The EMC ECS CLI (it requires registration with EMC)
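
The same credentials can also be used from an S3-compatible command-line tool such as s3cmd; a sketch of the relevant ~/.s3cfg entries (host values follow the ECS Test Drive endpoint above, and the keys are placeholders for your own credentials):

```ini
host_base = object.ecstestdrive.com
host_bucket = %(bucket)s.object.ecstestdrive.com
access_key = <your access key id>
secret_key = <your secret access key>
```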

 

 

Configuration of Google Cloud Storage for application access

The following steps can be useful if you have an application (service) on premises that needs to access (download/upload) files in Google Cloud Storage.

Open https://cloud.google.com/ and configure your free account if you haven't done so already.
Open the console and create a new project.

CreateProject.jpg

 

Select the correct project if you have access to multiple projects.

Select Project.jpg

Select Storage from the 'Products and services' menu.

Storage

 

Click 'Create Bucket' and give it a name, storage class, and location.

CreateBucketOption.jpg

CreateBucketOption2.jpg

 

Upload some files/folders manually.

UploadFiles

 

Click Settings, then Interoperability, and note down the cloud storage URL, access key, and secret key. If a secret key isn't present, click 'Create a new key'.

AccessKey.jpg

 

Testing bucket access:

Install CloudBerry (freeware) for Google Cloud on your test server.
Connect to Google Cloud Storage using the access key and secret key. Make sure the 'Access & secret key' authentication option is selected.

CloudBerrySetup.jpg

 

Test copy (or cut) and paste of files using the CloudBerry console.

UploadFiles.jpg

 

 

Configuring Rackspace Cloud Files storage for API access

The following steps can be useful if you have an application (service/API) in your environment that needs to access (download/upload) files in Rackspace Cloud Files storage.

Sign up for Rackspace cloud.
Go to the Rackspace control panel and log in with the root user account you configured during the signup process.

loginconsole.jpg

 

Create a new user account for API access. Go to User Management from the Account tab.

UserManagement.jpg

 

Click Create User. This console gives you a list of all users created so far; one default (root) user will be available by default.

CreateUser.jpg

Give the user details such as first name, last name, email, phone, etc. The contact type must be Technical; then select the appropriate permissions on the Rackspace cloud.

UserPermission.jpg

 

 

Once the user is created successfully, go to the properties of the user account and copy the Rackspace API key.

UserAPIKey.jpg

 

 

From the control panel, select Rackspace Cloud from the Product list, then select Files from the Storage list.

storage-files.jpg

 

Create a new container and select the appropriate region. Keep the type as Private.

ContainerCreate.jpg

 

Manually upload some files using the console itself.

UploadFiles.jpg

 

Testing bucket (container) access

Download the Rackspace Command Line Interface.
Go to the directory where you downloaded the rack binary and run the following command to connect to the container.

rack.exe configure

cmd1.jpg

Retrieves a list of containers.

rack files container list

cmd2.jpg

Lists all of the objects contained in a specified container.

rack files object list --container StorageAccess

cmd3.jpg

Uploads an object directory into a specified container

rack files object upload-dir --container StorageAccess --dir \temp\pictures

cmd4.jpg

You can also try checking cloud access using the CloudBerry Backup tool.