Channel: Secure Infrastructure Blog

Small tip: Where is the XSD Definition Tab in Visual Studio 2012 with Host Integration Server 2013


Today I faced an issue with Host Integration Server 2013 that took me some time to figure out. Usually when you develop transaction integration (or application integration) with BizTalk Server, you need XSD schemas to be able to perform mappings and send messages using a send port. With Host Integration Server up to the 2010 version, we did this by writing an application definition DLL within Visual Studio 2010; then, with the DLL definition open in the designer, you would click the “XSD Definition” tab in your Visual Studio window, as below:

image

Then you would copy the contents of this tab into a newly created XSD schema in a BizTalk project.

Now the problem I faced was that this tab is missing in HIS 2013 :( So how can we get the XSD? At first I thought it might be an installation issue, as that environment also had HIS 2010 and VS 2010, but it was not. It turns out that you now simply save the DLL and VS 2012 automatically generates the XSD for you (along with many other artifacts, such as WCF and ASMX services) inside the bin folder. So voilà: use that XSD.

Happy HISing ;)


Host Integration Server 2013 TI Assembly GAC deployment Walkthrough


Introduction

Host Integration Server (HIS) is used to integrate with legacy host technology. One of the host integration scenarios is calling mainframe programs or transactions using an HIS component called Transaction Integrator (TI). The way to use TI is to create what is called a TI assembly and then use it to call the host systems from the Windows side. The TI assembly can either be used from a folder or deployed to the Global Assembly Cache (GAC). I wanted to deploy the assembly to the GAC to make sure it is available from anywhere in the integration solution, and because I had strong-named assemblies in the solution that needed to reference it. This blog post describes the challenges faced while trying to do this and how they were resolved, so that the TI assembly could finally be deployed to the GAC and used from there in a BizTalk solution.

Discussion

Host Integration Server (HIS) is used to integrate with legacy host technology. One of the host integration scenarios is calling mainframe programs or transactions using an HIS component called Transaction Integrator (TI). TI technology has been around for a while (it used to be called COMTI), but it has gone through many changes and updates, and in HIS 2013 the TI component underwent major changes. There are two scenarios for using TI. The first is Host Initiated Processing (HIP), where the call is initiated from the host server to the Windows server; in this scenario the Windows server acts as the backend and the host acts as the channel or initiating system. The other scenario is Windows Initiated Processing (WIP), where the call is initiated from the Windows platform and routed to the host system. The diagram below illustrates the WIP scenario.

Figure 1: Windows Initiated Processing Scenario

To be able to implement a WIP scenario you need to build what is called a TI assembly. This assembly governs the interface and the mapping of parameters between the Windows system and the host system. Because this is a special assembly type, you create it using the TI project template that ships with HIS 2013. I am not going to describe how to create the TI assembly, as you can find many references on how to do this in the HIS 2013 SDK and on MSDN.
Once you have created the TI assembly you will have the following view in VS 2012.

Figure 2: TI Assembly in VS 2012

Now, to be able to deploy an assembly to the GAC, it needs to be signed with a key so that it has a strong name. If you click the Library node (the root node in the figure above) and check the properties, you will see the following:

Figure 3: TI Library Properties

The property I am interested in is KeyFile. According to the documentation, if you provide a key file in this property the generated assembly will be signed with that key and will have a strong name (sweet). Unfortunately, for some reason this does not work as expected; I could not get it to work, and the generated assembly is never signed.
I investigated several other options to sign the assembly after it is compiled, and this is doable, but I wanted something more streamlined and easier to repeat if I needed to regenerate the assembly.
Once I had signed the assembly (as I will show you in the next section) and GACed it, I created a BizTalk port using the BAHA (BizTalk Adapter for Host Applications) adapter and tried to configure it with the GACed assembly, only to find that the dialog to select the assembly from the GAC was always empty, as per the figure below.

Figure 4: Select Assembly from GAC form

The way to resolve this issue and finally configure the GACed assembly is also presented in the following section.

Solution & Walkthrough

So let’s get to how to make this work. The steps are as follows:

Part 1: GAC the TI Assembly

1-     Create the TI Assembly as you need and configure the calling model of the assembly to be “DirectCall, WCF, WS, BAHA”
clip_image009

2-     Now in the same properties make sure you have set the Debug property to “True”
clip_image011

3-     Now save the TI assembly and you will find the following generated artifacts in the output folder:
clip_image013

4-     Now take note of the generated CS file marked above. Create a new class library project and name it as you see fit. In that class library project, make sure the .NET version is set to at least 4.0 (as HIS 2013 has been upgraded to .NET 4.0) and then add the generated CS file to the project (you might even add it as a link, so that if the CS file is regenerated it is automatically updated in the class library project).
clip_image015

5-     Now make sure you add the following line to the AssemblyInfo.cs
[assembly: System.CLSCompliant(true)]

6-     Now go to the properties of the class library and make this library a signed library as you do with any C# library
clip_image016

7-     Add references to the following assemblies:
clip_image018
C:\Program Files\Microsoft Host Integration Server 2013\system\Microsoft.HostIntegration.TI.ClientContext.dll
C:\Program Files\Microsoft Host Integration Server 2013\system\Microsoft.HostIntegration.TI.TBGen.dll
C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.ServiceModel.dll
C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.Web.Services.dll

8-     Now build the assembly; it should build successfully. Then add it to the GAC using Gacutil.exe as you would with any other assembly. The TI assembly is now GACed and ready to be used.
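As a minimal sketch of this step, assuming the assembly from this walkthrough (MCS.Test.TI) and a default Windows SDK 8.0A location for gacutil.exe (adjust both to your machine):

# Install the signed TI wrapper assembly into the .NET 4.0 GAC (run from an elevated prompt; paths and names are illustrative)
& 'C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\gacutil.exe' /i '.\bin\Release\MCS.Test.TI.dll'
# Verify that it is now listed in the GAC
& 'C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\gacutil.exe' /l MCS.Test.TI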

Part 2: Use the GACed TI assembly from BizTalk

1-     Since the TI assembly is a .NET 4.0 assembly it will be GACed in the .NET 4.0 GAC and hence requires the .NET 4.0 Gacutil, which comes installed with VS 2012.

2-     Now for some reason the HIS HostApps adapter is not linked correctly to the VS2012 installation folder and hence I had to change things a little to make sure it is able to see the TI assembly in the GAC.

3-     Open the Regedit application and browse to the path “HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework”

4-     Add a new string value with the name “sdkInstallRootv2.0” and set the value of this string to “C:\Gacutil4”

5-     Now open the .net 4.0 tools folder “C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools” and copy all the contents of this folder to the folder “C:\Gacutil4\Bin” (just create this folder).
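A quick PowerShell sketch of steps 3 to 5 above; the SDK path matches a default VS 2012 install and may differ on your machine:

# Point the HostApps adapter at a .NET 4.0 gacutil location (paths below are assumptions for a default install)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework' -Name 'sdkInstallRootv2.0' -Value 'C:\Gacutil4' -PropertyType String -Force
New-Item -Path 'C:\Gacutil4\Bin' -ItemType Directory -Force
Copy-Item -Path 'C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\*' -Destination 'C:\Gacutil4\Bin' -Recurse -Force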

6-     Now open the BizTalk administration console.

7-     Create a new send port in BizTalk

8-     Select HostApps as the adapter, then click Configure and configure the connection string.
clip_image020

9-     Now click Add and then choose the Add Assemblies (GAC) option.
clip_image021

10-  Now you will see all assemblies installed in the .Net 4.0 GAC and you can select your TI assembly that you installed in Part 1 above
clip_image022

11-  Now complete the configuration as usual and click ok to configure the send port
clip_image024

12-  Now if you check the connection string created and compare the GACed TI assembly entry with a normal file-based one, you will find the following:

<?xml version="1.0" encoding="utf-8"?>
<mappings>
  <mapping>
    <assembly><![CDATA[C:\Program Files\Microsoft Host Integration Server 2013 SDK v1.0\ApplicationIntegration\WindowsInitiated\InstallationVerification\TiHostDefinitions\NetClnt1\bin\NetClnt1.DLL]]></assembly>
    <connectionString><![CDATA[CodePage=37;Name=TrmLink;TimeOut=0;SecurityFromClientContext=False;IPAddress=127.0.0.1;TCPPorts=7510;ConcurrentServerTransactionId=MSCS]]></connectionString>
  </mapping>
  <mapping>
    <assembly><![CDATA[MCS.Test.TI, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2b679cb5291d11c2, processorArchitecture=MSIL]]></assembly>
    <connectionString><![CDATA[CodePage=37;Name=TrmLink;TimeOut=0;SecurityFromClientContext=False;IPAddress=127.0.0.1;TCPPorts=7510;ConcurrentServerTransactionId=MSCS]]></connectionString>
  </mapping>
</mappings>

13-  Now you can use this send port normally.

Conclusion

The HIS 2013 TI assembly can be installed in the GAC to simplify your deployment configuration. The walkthrough above is needed on the development machine to configure and build the solution; on the production environment no changes are required at all.

Using Host Integration Server COBOL Import Wizard


Introduction

Host Integration Server allows application integration with mainframe systems using the Transaction Integrator (TI) component. The method to implement this integration involves building what is called a TI assembly. A TI assembly is built from the definition of the interface to the mainframe program and the definitions of the input and output COM areas. In many cases this involves reading large COBOL files and manually adding their fields to the TI library. The HIS development tools include an import utility to simplify importing from COBOL copybook files, but as we will see in this post it still has many challenges, so I will provide a systematic approach to make importing a COBOL copybook file easier.

Discussion

COBOL copybooks are the way mainframe programs define their inputs and outputs; you can think of them as the signature of a C# method. They are mainly text files with a specific structure, and you can find more details about that structure in the COBOL documentation. A COBOL copybook file looks something like the below:
clip_image002
Usually you will receive two COM area definitions like the one above from the mainframe team, one for the input COM area and another for the output (if they are different). So the challenge here is how to use the HIS import utility to import these into one method in the TI assembly. Usually when you try, the import wizard crashes and does not give you enough description of where the problem was, so you end up adding these things manually. That would be OK for smaller programs (like the one above), but for more complex programs with 20-50 fields in the input and output it becomes a very tedious task. The section below provides a simple approach and walkthrough for importing these files and using the import wizard successfully every time.

Solution & Walkthrough

The HIS import wizard is very sensitive to the spacing and structure of the COBOL copybook file. For example, a misalignment like the following will cause the import to fail.
clip_image004
So the first thing you need to do when dealing with COBOL copybooks is to align them perfectly. To do this I recommend using a tool such as Notepad++, as it understands this type of file and shows very helpful vertical guidelines.
clip_image005
This makes spotting misalignments much simpler.
The second thing you need to do is to merge the input and output COM area definitions into one COBOL copybook file, aligning the output as if it were part of the input; I will show you later how these get separated. Just give the output a name based on the same program name with a suffix such as “-OUT”. Please also note that every line has to be terminated with a dot.
clip_image006
Now that you have created and saved the COBOL copybook file, open Visual Studio and create a new client TI library in a TI project, then import the file as follows:

1-    
clip_image007

2-    
clip_image009

3-    
clip_image010

4-    
clip_image012

5-    
clip_image013

6-    
clip_image014

7-    
clip_image016

8-     Now the method is created as follows:
clip_image018

 

Installing MIM CM 2016 for Multiple Forests–Part 1


Howdy folks. MIM 2016 went GA some time ago, and one of the new features of the Certificate Management component is support for cross-forest issuance of certificates\smart cards. Though most enterprises comprise a single forest, in these times of mergers and acquisitions many enterprises consist of multiple forests, either in an account\resource forest configuration with a trust or simply multiple forests in one enterprise.

Today I will walk you through the requirements and additional configuration needed to enable cross-forest issuance of certificates\smart cards between two forests in a lab environment. This blog assumes that an environment consisting of two forests with a two-way trust is already set up, and that the resource forest has a certificate authority, a SQL server and a MIM CM server.

Servers: certificate authority, SQL server and MIM CM server in the resource forest.

Step 1 – Schema extension in both forests.

Execute below file on the schema master of the resource forest .

C:\MIM\Certificate Management\x64\Schema\resourceForestModifySchema.vbs

Execute below file on the schema master of the account forest.

C:\MIM\Certificate Management\x64\Schema\userForestModifySchema.vbs

A schema change is typically a one-way operation and requires a forest recovery to roll back, so make sure you have the necessary backups.
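If you prefer to script this, a minimal sketch, assuming the installation media is extracted to C:\MIM as in the paths above:

# Run on the schema master of each forest from an elevated prompt
cscript.exe //NoLogo "C:\MIM\Certificate Management\x64\Schema\resourceForestModifySchema.vbs"   # resource forest
cscript.exe //NoLogo "C:\MIM\Certificate Management\x64\Schema\userForestModifySchema.vbs"       # account forest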

Step 2 – Prepare the certificate templates.

Prepare three certificate templates for the MIM CM agent accounts as per the guidelines in below article.

Prepare the MIM CM Agent Certificate Templates.

Step 3 – Install MIM CM on the Certificate Authority.

Browse to \MIM\Certificate Management\x64\ and execute setup.exe. Make sure MIM CM CA Files option is enabled while running the wizard on the Certificate authority as shown in the image below.

image

Step 4 – Install IIS on the MIM CM Portal server.

Install the Web Server (IIS) role from Server Manager.

In the role services section of the wizard, select the options below in addition to those enabled by default when installing IIS (a PowerShell sketch of the equivalent follows the list).

a. Common HTTP Features – HTTP Redirection

b. Health and Diagnostics – Request Monitor

c. Performance – Dynamic Content Compression

d. Security – Basic Authentication, Windows Authentication

e. Application Development – .NET Extensibility 4.5, ASP, ASP.NET 4.5, ISAPI Extensions.

f. Management Tools – IIS Management Console, IIS 6 Management Compatibility (All)
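A rough PowerShell equivalent of the selections above; the Windows feature names are my assumption for Windows Server 2012 R2 and should be verified with Get-WindowsFeature on your server:

# Install IIS with the role services listed above (sketch; verify feature names with Get-WindowsFeature)
Install-WindowsFeature Web-Server, Web-Http-Redirect, Web-Request-Monitor, Web-Dyn-Compression,
    Web-Basic-Auth, Web-Windows-Auth, Web-Net-Ext45, Web-ASP, Web-Asp-Net45, Web-ISAPI-Ext,
    Web-Mgmt-Console, Web-Mgmt-Compat -IncludeManagementTools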

Step 5 – Install CM component on the MIM CM server.

Browse to \MIM\Certificate Management\x64\ and execute setup.exe. Make sure MIM CM Portal option is enabled while running the wizard on the server as shown in the image below.

image

Below is the virtual folder for your MIM CM portal. You can add a custom name if you’d like. Make sure you have the same name if installing multiple MIM CM portal servers.

image

Step 6 – Configure MIM CM.

Click Start and you will see the Certificate Management configuration wizard under newly installed applications. Execute it as an administrator. When running the configuration wizard, make sure you run it as an account that has permissions to write to the configuration and domain partitions of the resource forest; an enterprise admin is recommended.

image

You can use multiple CAs to issue certificates using MIM CM. Select one CA which will be the first CA and you can add the rest later.

image

Enter the name of the SQL server and the credentials that have rights to create the database.

image

Select the database name. You can use the default name or a friendly name. Again, make sure you are using the same name for the database if installing multiple MIM CM servers.

image

Since we have a two-way trust, we will see the trusted forest. Once we click the checkbox next to the forest name it shows green, as below. It will fail if there are issues with the trust or DNS, or if the schema is not extended. You can also change the Service Connection Point name to reflect a common name if you have two servers, by clicking Change and setting the common name.

image

Select Windows Integrated Authentication.

image

Select the agent accounts to be used. You can create custom accounts and add them here by unchecking ‘Use the FIM CM default settings’ and clicking on custom accounts, or you can let the MIM CM configuration wizard create the accounts automatically. If creating multiple MIM CM servers, we recommend creating the accounts beforehand.

image

Select the corresponding templates created in step 2.

image

Specify the name of the SMTP server you want to use for email registration.

image

Click the configure button to start the configuration.

image

It will show a popup asking to require SSL; this can be done later by binding a certificate in IIS.

image

Click on the finish button to complete the configuration.

image

Once the above steps are complete, we need to perform the post-installation tasks, as was done for FIM CM. Refer to the article below to complete the post-installation tasks for MIM CM.

Post-installation tasks

Your MIM CM server is now configured for cross-forest enrollment, but we still have some configuration to do on the certificate authority and Active Directory before we can issue certificates\smart cards across the forest. That will be covered in part 2 of this blog.

Lishweth KM

Import Database schema in Azure SQL DB from .SQL files programmatically with SQLCMD


Introduction – This blog post illustrates how you can import your database schema and tables into an empty Azure SQL DB (PaaS) programmatically. Currently Azure SQL DB supports import from a BACPAC file through PowerShell and the GUI, but not from .SQL files.

Assumptions – Here we assume that you already have .SQL files generated from the on-premises database and ready to run against Azure SQL DB.

Problem statement – I had a requirement to import schema and tables into an empty Azure SQL DB from .SQL files. Currently Azure only provides import of BACPAC files out of the box from PowerShell, the GUI and SQL Server Management Studio, but the requirement here was to do it programmatically every time the ARM deployment script creates a new Azure SQL DB.

 

Resolution/Workaround – Follow the steps below.

1. Install SQLCMD on the VM/desktop from which you are running the script or deployment. The SQLCMD utility is used to run the .SQL files against Azure SQL DB. The ODBC driver is required before installing SQLCMD.

ODBC driver: http://www.microsoft.com/en-in/download/details.aspx?id=36434

SQLCMD: http://www.microsoft.com/en-us/download/details.aspx?id=36433

2. Save all the .SQL files into a folder on the local VM.

3. Get the public IP of your local VM/desktop using the code below.

# Determine the machine's public IP by scraping checkip.dyndns.com
$IP = Invoke-WebRequest checkip.dyndns.com
$IP1 = $IP.Content.Trim()
$IP2 = $IP1.Replace("<html><head><title>Current IP Check</title></head><body>Current IP Address: ","")
$FinalIP = $IP2.Replace("</body></html>","")

4. Create a new firewall rule to connect to SQL server.

New-AzureRmSqlServerFirewallRule -FirewallRuleName $rulename -StartIpAddress $FinalIP -EndIpAddress $FinalIP -ServerName $SQLservername -ResourceGroupName $resourcegroupname

5. Save the SQL server's full name and the sqlcmd path into variables.

$Fullservername = $SQLservername + '.database.windows.net'
$sqlcmd = "C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn\SQLCMD.EXE"

6. Save SQL server credentials and Azure SQL DB name in variables.

$username = "SQLusername"

$password = "SQLpassword"

$dbname = "databasename"

7. Run the below command for each .SQL file if you want to import them sequentially.

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\file1.sql"

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\file3.sql"

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\filen.sql"
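If there are many files, a small loop avoids repeating the call; this sketch reuses the variables defined in steps 5 and 6 and assumes the same C:\SQL folder:

# Run every .sql file in the folder in name order
Get-ChildItem -Path 'C:\SQL' -Filter '*.sql' | Sort-Object Name | ForEach-Object {
    & $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i $_.FullName
}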

 

NOTE – You can combine all the code above and use it in deployment scripts along with functions and error logging.

 

Thanks folks, Hope it is useful.

happy blogging

Migrating Performance Point To New SharePoint Site with different path


Introduction

When you migrate a SharePoint site to a different location that has a different site structure and a different path, the SharePoint content itself is migrated, but some components will not work as expected because the site URL structure is different.

One of these components is Performance Point content.

PerformancePoint content contains links to the PerformancePoint connections in the Connections library. Because the path has changed, these links are no longer valid. The dashboards also contain reference links to other PerformancePoint content such as KPIs, reports and scorecards, and these reference links become invalid as well.

This causes the PerformancePoint web parts on the dashboard pages to show an error saying that the data source does not exist or that you do not have permission.

Migration Steps

The migration procedure consists of two major steps:

1. Exporting Performance Point content and connections in a Dashboard Designer Workspace (ddwx) file from the Source environment.

2. Import the ddwx file in the destination environment.

Export Dashboard Designer Workspace (ddwx) file from Source environment

1. Launch PerformancePoint dashboard designer.

2. Click on the PerformancePoint content list.

3. Select all the items in the list (Ctrl-A)

Export Content_1

 

4. Click the “Add Items” button on the ribbon, under the Workspace section.

Export Content_2

 

5. All items should now be added to the workspace area, as shown in the image below.

Export Content_3

6. Apply steps 2 to 5 to all the PerformancePoint content lists and connections lists.

7. Save the workspace by clicking the Office button and choosing Save Workspace As.

        Export Content_4

 

Import Dashboard Designer Workspace (ddwx) file in the destination environment

In this step you will need to import the ddwx file to your destination environment.

1. Launch PerformancePoint dashboard designer.

2. Click on Import Items

Import Content_1

 

3. Map Performance Point Items to the corresponding item in your destination environment.

Import Content_2

Import Content_3

 

4. Select the “Import data sources that already exists in the destination” option and click Next.

Import Content_4

 

5. Wait until the import completes and make sure that all items are updated with no errors.

Import Content_6

Issues

I faced an issue once after finishing the migration: some reports and KPIs still had connection links that were not corrected. I figured out that there was more than one PerformancePoint report or KPI with the same name in the same content list. In those cases only one of the items with the duplicate name was updated, while the other was not; it was the same for the KPIs.

In this case I had to recreate the reports and KPIs that were not updated.

 

 

Issue with passing data from PowerShell to data bus in SCO, check this out !! this might help you


Hello readers! PowerShell scripts executed within a System Center Orchestrator runbook use the built-in “Run .NET Script” activity, which by default uses PowerShell version 2.0. Many times we require PowerShell scripts to be executed in version 3.0, and one way to do that is to execute PowerShell.exe from the “Run .NET Script” activity. A script can be run in a 64-bit PowerShell 3.0 environment using C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe { <your code> }.

As part of a System Center Orchestrator runbook workflow, data from the “Run .NET Script” activity might need to be published to the data bus, and for the kind of scenario above a PowerShell custom object has to be created inside the PowerShell 3.0 block. The example below shows briefly how this can be done in the “Run .NET Script” activity.

[sourcecode language='powershell' ]
$inputobjs1 = C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe {

$SvchostPID = get-process | where { $_.ProcessName -eq 'svchost'} | select -ExpandProperty id
$NotepadPID = get-process | where { $_.ProcessName -eq 'notepad'} | select -ExpandProperty id

New-Object pscustomobject -Property @{
SvchostPID_OP = $SvchostPID
NotepadPID_OP = $NotepadPID
}

}

$SvchostPID =$inputobjs1.SvchostPID_OP
$NotepadPID = $inputobjs1.NotepadPID_OP
[/sourcecode]

 

Using a similar type of code, I was working on a script in which I was not able to retrieve the data stored in the PS custom object out of the PowerShell 3.0 block, and after troubleshooting for hours I was able to identify the issue.

Let me explain the issue using a sample script to give a better understanding. The script below connects to the SCVMM server and gets the number of vCPUs, the memory and the generation of a specific virtual machine.

[sourcecode language='powershell' ]
$inputobjs2 = .$env:windir\sysnative\windowspowershell\v1.0\Powershell.exe{

import-module virtualmachinemanager
Get-SCVMMServer -ComputerName scvmm2012R2
$vmvalues = get-vm testvm3 | select -Property cpucount, memory, generation

New-Object pscustomobject -Property @{
CPUCount_OP = $vmvalues.cpucount
Mem_OP = $vmvalues.Memory
Generation_OP = $vmvalues.Generation
}

}

$cpucount = $inputobjs2.CPUCount_OP
$Mem = $inputobjs2.Mem_OP
$generation = $inputobjs2.Generation_OP
[/sourcecode]

This script is not able to pass the values to the Orchestrator data bus using the PS custom object. To make it work, line number 3 has to be updated to store the SCVMM connection in a variable:

$session = Get-SCVMMServer -ComputerName scvmm2012R2

This is required because whenever we connect to the SCVMM server, another shell is invoked and the rest of the script executes in that shell, due to which the PS custom object created there cannot be retrieved onto the Orchestrator data bus.

The updated script is as below:

[sourcecode language='powershell' ]
$inputobjs2 = .$env:windir\sysnative\windowspowershell\v1.0\Powershell.exe{

import-module virtualmachinemanager
$session = Get-SCVMMServer -ComputerName scvmm2012R2
$vmvalues = get-vm testvm3 | select -Property cpucount, memory, generation
$session.disconnect()

New-Object pscustomobject -Property @{
CPUCount_OP = $vmvalues.cpucount
Mem_OP = $vmvalues.Memory
Generation_OP = $vmvalues.Generation
}

}

$cpucount = $inputobjs2.CPUCount_OP
$Mem = $inputobjs2.Mem_OP
$generation = $inputobjs2.Generation_OP
[/sourcecode]

So folks, whenever you connect to applications through PS cmdlets that invoke their own shell and you need data to be passed back to Orchestrator, remember to save the connection parameters to a variable.

Cheers !! 🙂

DHCP Pool creation in SCVMM 2012R2


 

Hello readers! I thought I would put down some notes on how to create a DHCP (IP) pool in System Center Virtual Machine Manager 2012 R2 and show you how simply this can be done.

To start with, creating a DHCP pool requires fabric administrator access in SCVMM.

The DHCP pool is created in the Fabric pane, under Networking, Logical Networks. The Logical Networks section lists all the networks defined in SCVMM, which should be designed so that they map to the actual physical network structure in the environment.

On the menu bar, select the Create IP Pool option; a window as shown in figure 1 pops up. Provide the name and description for the pool you want to create, and select the logical network under which the pool has to be created.

 clip_image001

Under the network site section, select the “Use an existing network site” option if the network site is already defined in SCVMM, or select the “Create a network site” option if a network site has to be defined. Select the “Create a multicast IP address pool” option when you want to use multicast or broadcast with the subnet, a new feature introduced in SCVMM 2012 R2 / SCVMM 2012 SP1.

 clip_image002

In the next pane, provide the range of IP addresses that are to be part of the DHCP pool by specifying the starting and ending IP address.

You can explicitly list the IP addresses within the selected range that are to be reserved for load balancer VIPs in the “IP addresses reserved for load balancer VIPs” section, and any addresses within the range that are to be reserved or used for some other purpose can be listed in the “IP addresses to be reserved for other uses” section.

 clip_image003

In the next section, specify the gateway of the subnet as shown in the figure below.

clip_image004

 

In the next section, provide the DNS server and DNS suffix details to be used for the subnet.

clip_image005

 

Review the settings on the summary page and click Finish.

clip_image006

A job will be triggered in SCVMM to create the IP pool, and once it completes the new pool will be visible under the logical network it was created in.

clip_image007

That’s all! It is very simple and easy to create a DHCP pool in SCVMM, and managing IPs is far more automated with SCVMM than with a normal Windows server holding the DHCP role.
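For completeness, here is a rough PowerShell sketch of the same wizard using the VMM cmdlets; the logical network, site, and address values are placeholders, and the parameter set should be checked against Get-Help New-SCStaticIPAddressPool on your VMM server:

# Create the same IP pool from the VMM PowerShell module (all names and addresses below are examples)
$logicalNet = Get-SCLogicalNetwork -Name 'Corp-LogicalNetwork'
$netDef     = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNet -Name 'Corp-Site1'
$gateway    = New-SCDefaultGateway -IPAddress '10.10.10.1' -Automatic
New-SCStaticIPAddressPool -Name 'Corp-Pool1' -LogicalNetworkDefinition $netDef -Subnet '10.10.10.0/24' `
    -IPAddressRangeStart '10.10.10.50' -IPAddressRangeEnd '10.10.10.200' `
    -DefaultGateway $gateway -DNSServer '10.10.10.10' -DNSSuffix 'corp.contoso.com'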


Using Powershell to integrate SCOM Management Group to Operations Management Suite


Environment: SCOM 2012 R2 with Update Rollup 11

 

Register SCOM Management Group:

On-premises SCOM management groups can be integrated with Operations Management Suite (OMS) from the SCOM console under the Administration tab. Here, I am demonstrating a way to integrate SCOM with OMS via PowerShell cmdlets. We will integrate SCOM with OMS using the cmdlet “Register-SCAdvisor”.

This cmdlet needs a certificate .pfx file from the OMS workspace (not the certificate from the Azure portal). In order to obtain this certificate, append “DownloadCertificate.ashx” to your OMS workspace URL root. So the new URL will look like:

https://<workspacename>.portal.mms.microsoft.com/DownloadCertificate.ashx

Eg: in case of the workspace I am using for this demo, it will be: https://deepuomsdemo.portal.mms.microsoft.com/DownloadCertificate.ashx

Once you enter this URL in a browser after logging in to your OMS portal, you will be prompted to download the .pfx file. Save it to a folder on your SCOM management server. Before proceeding with the integration, make sure that the necessary management packs for the OMS integration are imported into the management group. For Update Rollup 11 (UR11), the following management packs (version 7.1.10226.1239) under the location “C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\Management Packs for Update Rollups” need to be imported:

  • Microsoft.SystemCenter.Advisor.Internal.mpb
  • Microsoft.SystemCenter.Advisor.mpb
  • Microsoft.SystemCenter.Advisor.Resources.ENU.mpb

You can use the Import-SCOMManagementPack cmdlet to import the above management packs, as shown below. To see the changes brought in by these management packs, close and reopen the SCOM console.
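A minimal sketch of that import from the Operations Manager Shell, assuming the UR11 folder path mentioned above:

# Import the three Advisor/OMS management packs shipped with UR11
$mpFolder = 'C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\Management Packs for Update Rollups'
Get-ChildItem -Path $mpFolder -Filter 'Microsoft.SystemCenter.Advisor*.mpb' |
    ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }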


In most cases the SCOM server will communicate with the internet/OMS via a proxy. In that case, we can configure the proxy settings in the SCOM console under Administration -> Operations Management Suite -> Connection -> Configure Proxy Server, or via PowerShell with the Set-SCAdvisorProxy cmdlet. For proxy servers that need authentication, the “System Center Advisor Run As Profile Proxy” profile needs to be configured with a Run As account.

 


Now open the Operations Manager Shell and run the following command:

Register-SCAdvisor -CertificatePath <path of pfx file>

The above command will return “True” if the registration with OMS is successful. Eg:


Note that SCOM UR11 adds support for registering the Operations Manager management group with OMS workspaces in regions other than the Eastern US by using an additional optional parameter (-SettingServiceUrl), which is the URL of the setting service for the region of the workspace.

Tip: If SettingServiceUrl is not specified, the workspace is assumed to be in the Eastern US. You will get the following error message if the workspace is in a region other than Eastern US and is not explicitly mentioned in the command.


Now the SCOM management group is registered with the OMS portal. However, OMS will not collect the monitoring data from the SCOM agents unless they are added as managed computers in the Administration tab of the SCOM console, under the Operations Management Suite container. We can add the agents to OMS via the PowerShell cmdlet “Add-AdvisorAgent”. Eg:


Once the agents are added, they will start syncing with OMS, and we can then verify the OMS integration.

 

Verification:

We can verify the integration after a couple of hours by the following methods.

  1. Via PowerShell: Run the cmdlet Get-SCAdvisorAgent to get the list of servers monitored by SCOM that are in turn syncing data with the OMS portal. Eg:


     

  2. Via OMS Portal: Open the OMS workspace and go to Settings -> Connected Sources -> System Center. You will see the SCOM management group listed as below, showing the name of the management group, the number of SCOM agents connected to OMS and the last data update time.


     

  3. Via SCOM Console: Navigate to Monitoring -> Operations Management Suite -> Health State. Here we can see the health status of the SCOM management servers connected to the OMS portal. Eg:


 

Unregister SCOM from OMS.

To unregister a SCOM management group from OMS, please refer to this excellent blog by Kevin Holman:

https://blogs.technet.microsoft.com/kevinholman/2016/03/26/how-to-remove-oms-and-advisor-management-packs/

 

Cheers.

Decommission of Exchange Server 2010


To decommission Exchange Server 2010, follow the procedure below. The account you use must be delegated membership in the Exchange Full Administrator role on the Exchange servers.

  1. Move all legacy Exchange 2010 mailboxes to newly deployed Exchange server 2013/2016 in the organization.
  2. Move all content from the public folder database on the Exchange 2010 server to a public folder database on an Exchange 2013/2016 server in the organization.
  3. Remove all replicas of PFs on the 2010 side using the 2010 management tools, so that all PFs in the 2010 hierarchy have only one replica. This should be doable even though you migrated the PFs to 2013/2016.
  4. Remove the public folder mailbox and stores on the Exchange 2010 server
  5. On Exchange 2010 servers, for each offline address book (OAB), move the generation process to an Exchange 2013/2016 server. Ensure 2013/2016 is the one generating/serving OABs for users.
  6. Remove all added DB copies of mailbox DBs so each DB has a single copy in Exchange Server 2010.
  7. Remove all nodes from any existing Exchange Server 2010 Database Availability Group
  8. Delete the Exchange Server 2010 Database Availability Group
  9. Optional: Set the RpcClientAccessServer value of all 2010 DBs to the FQDN of their server
  10. Optional: Remove the CAS Array Object(s)
  11. Check the SMTP logs to see if any outside systems are still sending SMTP traffic to the servers via hard coded names.
  12. Start removing mailbox databases to ensure no arbitration mailboxes still exist on Exchange 2010 servers
  13. Verify that Internet mail flow is configured to route through your Exchange 2013/2016 transport servers
  14. Verify that all inbound protocol services (Microsoft Exchange ActiveSync, Microsoft Office Outlook Web App, Outlook Anywhere, POP3, IMAP4, Auto discover service, and any other Exchange Web service) are configured for Exchange 2013/2016.
  15. Start uninstalling Exchange Server 2010 and reboot the server.
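A couple of illustrative Exchange Management Shell snippets for steps 1 and 12; the server and database names are placeholders, not values from a real environment:

# Step 1: move legacy mailboxes from an Exchange 2010 server to an Exchange 2013/2016 database
Get-Mailbox -Server 'EX2010-MBX01' -ResultSize Unlimited |
    New-MoveRequest -TargetDatabase 'EX2016-DB01'

# Step 12: check that no arbitration mailboxes remain on a 2010 database before removing it
Get-Mailbox -Database 'EX2010-DB01' -Arbitration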

 

Docker – Fails to create container on Windows 10 – Error response from daemon container..encountered an error during start


I thought I would share some findings on fixing the issue below; maybe it can help someone working with Docker on Windows 10.

 Issue: Unable to create docker container on Windows 10 Version 1607

Error response from daemon container..encountered an error during start

Workaround: docker run -it --rm --net=none microsoft/nanoserver cmd

 Finding & Cause:

Gather a network trace using the command netsh trace start globallevel=7 provider=Microsoft-Windows-Host-Network-Service report=di. On viewing the logs we found the message “HNS failed to create vmswitch port with error ‘0x80070003’, switch id = ‘c502a850-2f21-4d55-9879-14cc66f69193’, port id = ‘e2e3b5ba-1de9-4650-a0e0-50276c0e2cb8’ and type = ‘Value_3’”.

Checked the VM switches and found that the NAT switch was missing (normally we delete and re-create vmswitches based on the Hyper-V VMs' requirements, as this is a lab).

get-vmswitch

Checked the container networks and found that the NAT network was listed in second order.

get-containernetwork

Solution: Follow the below steps to get rid of the error.

Get-ContainerNetwork | Remove-ContainerNetwork -Force

Restart-Service hns

Restart-Service docker

Get-ContainerNetwork

Get-VMSwitch

Get-NetNat

And finally, creating the container again worked successfully.

Lesson learnt: whenever you play with VM switches in Hyper-V, it will also impact Docker. :)

Configuring SQL Reporting Services for Remote or different SCOM DW Database


This article describes how to configure SQL Server Reporting Services to generate reports from a remote SCOM Data Warehouse database (DW DB). It will also help when reconfiguring Reporting Services after a change in the SCOM DW DB, port or server name.

In a simple, single-server test SCOM installation there is no need to configure these settings, since all the databases and the reporting services are on the same server. However, in production environments where Reporting Services and the Data Warehouse database are in separate environments, we need to configure Reporting Services to use SCOM's Data Warehouse database. This database may reside on a remote SQL failover cluster or be part of an AlwaysOn availability group.

The configuration below is done in an environment where Reporting Services is installed on one of the SCOM management servers and the Data Warehouse DB resides in a remote SQL AlwaysOn availability group.

Steps:

Access the SQL Server Reporting Services Web Page and click on Details View.

Now click on Data Warehouse Main and select ‘Manage’ from the drop-down menu.

Under Data Source Type, select “Microsoft SQL Server”.

Under Connection String, provide the connection string details in the following format:

data source=<DBINSTANCE or Listener Details>;initial catalog=<DW DB Name>;Integrated Security=SSPI

e.g.:

data source=SCOMListener01,14533;initial catalog=OperationsManagerDW;Integrated Security=SSPI

(where SCOMListener01 is the SQL AlwaysOn Listener listening on port 14533 and OperationsManagerDW is the Data Warehouse Database part of the Availability Group)

Select “Windows Integrated Security” under “Connect Using” options:

Click on “Test Connection” to make sure that the connection is successful as shown in the below image.

Click on Apply.

 

This makes sure that Reporting Services is connected correctly to the DW database.

Also, when the SCOM database or server is changed, or the listening ports are changed for security or maintenance reasons, the above steps can be repeated with the new details in the connection string.

Difference between Azure Service Manager and Azure Resource Manager


The comparison below pairs each Azure Service Manager (ASM) characteristic with its Azure Resource Manager (ARM) counterpart.

ASM: The old portal, providing cloud services for IaaS workloads and a few specific PaaS workloads.
ARM: The new portal, providing services for all IaaS and PaaS workloads.

ASM: Accessed at https://manage.windowsazure.com, termed the V1 portal.
ARM: Accessed at https://portal.azure.com, termed the V2 portal, with the blade-style portal view.

ASM: Azure Service Manager exposes an XML-driven REST API.
ARM: Azure Resource Manager exposes a JSON-driven REST API.

ASM: Had the concept of an Affinity Group, which has been deprecated.
ARM: Has a container concept called the Resource Group, a logical set of correlated cloud resources that can span multiple regions and services.

ASM: A private Azure portal can be built using Windows Azure Pack.
ARM: A private Azure portal can be built using Azure Stack.

ASM: Removal or deletion is not as easy as in Azure Resource Manager.
ARM: Removal of resources is easier; deleting the resource group (RSG) deletes all the resources present in the RSG.

ASM: Deployment can be performed using PowerShell scripts.
ARM: Deployment can be performed using ARM templates, which provide simple orchestration and rollback functions, and ARM has its own PowerShell module.

The following features exist only on the ARM side and are not available in ASM:

  • Role Based Access Control (RBAC).
  • Resources can be moved between resource groups within the same region.
  • Resource tagging: name-value pairs assigned to resources and resource groups, with up to 15 tags per resource.
  • Massive, parallel deployment of VMs is possible with asynchronous operations.
  • Custom policies can be created to restrict the operations that can be performed.
  • Azure Resource Explorer (https://resources.azure.com/), which helps in understanding resources and with deployment.
  • Resource Locks provide a policy to enforce a lock level that prevents accidental deletion.
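As an illustration of the ARM deployment row above, a minimal sketch with the AzureRM PowerShell module; the resource group name, location and template path are placeholders:

# Deploy an ARM template into a resource group (names and paths are examples only)
New-AzureRmResourceGroup -Name 'demo-rsg' -Location 'West Europe'
New-AzureRmResourceGroupDeployment -ResourceGroupName 'demo-rsg' -TemplateFile '.\azuredeploy.json'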

 

For more details, refer to: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-deployment-model

Control code execution between Load and Webtests

Requirement

As a performance tester, I would like to execute a specific plugin or block of code during a web test and skip the same code when I execute the load test (which contains this web test), and vice versa. This needs to be done without making any configuration changes when running the web test or the load test. We can use the sample code snippets below to accomplish this.
******** Execute a specific block of code during WebTest and LoadTest ********

public class MySampleWebTest : WebTest
{
    public MySampleWebTest()
    {
        this.PreAuthenticate = false;
        this.Proxy = "default";
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        if (this.Context.ContainsKey("$LoadTestUserContext"))
        {
            // *********** This branch executes automatically when the web test runs inside a load test ***********
            //this.Context.Add("Debug", "InLoadTestMode");
        }
        else
        {
            // *********** This branch executes automatically when the web test runs on its own ***********
            //this.Context.Add("Debug", "InWebTestMode");
        }

        yield break; // yield return your WebTestRequest objects here
    }
}
******** Execute a specific plugin during WebTest and LoadTest ********

// Execute a plugin that reads test data from a CSV file during a load test, and use a hard-coded value during web test execution
public class MySampleWebTest : WebTest
{
    // Plugin instance; it is wired up only during a load test, not during a standalone web test
    private SetUnqiueLoginName testPlugin0 = new SetUnqiueLoginName();

    public MySampleWebTest()
    {
        this.PreAuthenticate = true;
        this.Proxy = "default";
        this.StopOnError = true;

        if (this.Context.ContainsKey("$LoadTestUserContext"))
        {
            // During a load test, the plugin reads the unique login name from the CSV file
            this.PreWebTest += new EventHandler<PreWebTestEventArgs>(this.testPlugin0.PreWebTest);
        }
        else
        {
            // During a standalone web test, use a hard-coded value instead
            this.Context.Add("UniqueLoginName", "smoketestuser");
        }
    }
}

Read data from multiple data sources using Visual studio WebTest


Requirement

As a performance tester, I should be able to read data from any number of data sources based on my requirement (for example, different environments: Test, UAT, Prod). We can accomplish this using the code snippet below.
    [DataSource("TestDataSource", "System.Data.SqlClient", Constants.TestDataConnectionString, Microsoft.VisualStudio.TestTools.WebTesting.DataBindingAccessMethod.Sequential,
        Microsoft.VisualStudio.TestTools.WebTesting.DataBindingSelectColumns.SelectAllColumns, "Users")]
    [DataBinding("TestDataSource", "Users", "userid", "TestDataSource.Users.userid")]
    // Note: a UAT-specific connection string constant would normally be used for the second data source
    [DataSource("UATDataSource", "System.Data.SqlClient", Constants.TestDataConnectionString, Microsoft.VisualStudio.TestTools.WebTesting.DataBindingAccessMethod.Sequential,
        Microsoft.VisualStudio.TestTools.WebTesting.DataBindingSelectColumns.SelectAllColumns, "Users")]
    [DataBinding("UATDataSource", "Users", "userid", "UATDataSource.Users.userid")]
    public class CreateUser : WebTest
    {
        // Env is assumed to be provided elsewhere (for example from a context parameter); "Test" is just a placeholder default
        private string Env = "Test";

        public CreateUser()
        {
            this.PreAuthenticate = true;
            this.Proxy = "default";
        }

        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            if (Env == "Test")
            {
                string TestUser = Context["TestDataSource.Users.userid"].ToString();
            }
            else
            {
                string UatUser = Context["UATDataSource.Users.userid"].ToString();
            }

            yield break; // yield return your WebTestRequest objects here
        }
    }
