Channel: Secure Infrastructure Blog

Getting Started with Terraform on Azure DevOps


Introduction

As with most things, there are a number of ways to use Azure DevOps to orchestrate the management of Azure resources through Terraform. This post walks through an approach that I have found to be successful and relatively easy to maintain. It will not, however, describe the many benefits of using an Infrastructure as Code approach, as that is a much broader topic.

To follow along with the example below, please ensure you have the multistage pipeline feature enabled; it is still in preview as of the publishing of this post.

Contents

Prior to using Terraform to deploy infrastructure on Azure, there are a few setup steps. The first is to create an Azure Resource Manager service connection within Azure DevOps. From there, I recommend using a script to set up the needed variables in Key Vault, but this can also be accomplished through the portal, PowerShell, or individual az CLI commands.

The script I use for this creates a resource group, a Key Vault, and a service principal. The service principal will be used by Terraform for its interactions with Azure Resource Manager.
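As an illustration, such a script might look like the following Az PowerShell sketch. All names here are placeholders, and the property that holds the service principal secret varies between Az module versions, so treat this as a starting point rather than the exact script:

```powershell
# Hypothetical names - adjust for your environment
$rg = New-AzResourceGroup -Name 'terraform-mgmt-rg' -Location 'eastus'
$kv = New-AzKeyVault -VaultName 'terraform-mgmt-kv' -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location
$sp = New-AzADServicePrincipal -DisplayName 'terraform-sp'

# Store the values Terraform's azurerm provider reads (the ARM_* settings)
$ctx = Get-AzContext
Set-AzKeyVaultSecret -VaultName $kv.VaultName -Name 'ARM-CLIENT-ID' `
    -SecretValue (ConvertTo-SecureString "$($sp.ApplicationId)" -AsPlainText -Force)
Set-AzKeyVaultSecret -VaultName $kv.VaultName -Name 'ARM-CLIENT-SECRET' `
    -SecretValue $sp.Secret   # secret property name differs between Az versions
Set-AzKeyVaultSecret -VaultName $kv.VaultName -Name 'ARM-TENANT-ID' `
    -SecretValue (ConvertTo-SecureString $ctx.Tenant.Id -AsPlainText -Force)
Set-AzKeyVaultSecret -VaultName $kv.VaultName -Name 'ARM-SUBSCRIPTION-ID' `
    -SecretValue (ConvertTo-SecureString $ctx.Subscription.Id -AsPlainText -Force)
```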

The Key Vault you created can then be used in Azure DevOps by creating a variable group that is linked to it.

I recommend using a consistent folder structure for your pipeline and terraform configuration. This allows you to more easily maintain your code, but also significantly improves the usability for future developers. In my case, I like to have a pipelines folder that contains the main pipeline.yml for orchestrating the overall process and a templates folder that contains my pipeline templates.

The initial section of the pipeline contains environment-independent actions that should only need to be performed once. This is similar to the build and unit test phase of a typical application deployment.

For simplicity, I am using template files for the individual steps. For the Setup phase, this includes formatting, init, and validation.

The fun part is the actual deployment. This can be separated into stages for each of the environments to which you want to deploy resources. The first job in the deployment is plan, which, as you might imagine, runs terraform plan. The second job is apply, which runs terraform apply.
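To make the shape of this concrete, a multistage pipeline following the folder structure above might look like the sketch below. The template file names are placeholders, not the actual files from my repository:

```yaml
# pipelines/pipeline.yml - a hypothetical layout
trigger:
  - master

stages:
  - stage: Setup
    jobs:
      - job: Validate
        steps:
          - template: templates/terraform-format.yml    # terraform fmt -check
          - template: templates/terraform-init.yml      # terraform init
          - template: templates/terraform-validate.yml  # terraform validate

  - stage: Dev
    dependsOn: Setup
    jobs:
      - job: Plan
        steps:
          - template: templates/terraform-plan.yml      # terraform plan
      - deployment: Apply
        dependsOn: Plan
        environment: dev
        strategy:
          runOnce:
            deploy:
              steps:
                - template: templates/terraform-apply.yml  # terraform apply
```

Additional environments (test, prod) are added as further stages with a dependsOn chain, which also lets you attach approvals per environment.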

Conclusion

There are many ways to deploy Azure Resources. Hopefully this post provides some ideas on how you can use an Infrastructure as Code approach to deploy using Azure DevOps and Terraform. This link shows a working example that utilizes this approach.


PowerShell: Active Directory Cleanup – Part 1


Hello World, Scott Williamson, Senior Premier Field Engineer, here. As a PFE, I frequently work with customers who ask how to clean up old objects and data in Active Directory. To help them automate the cleanup, I have written several PowerShell scripts, functions, and workflows, and I want to share them in this blog series.

The first of these scripts checks for and cleans up old duplicate computers. Duplicate computers are rarely seen in newer versions of Active Directory (AD) unless you are having replication issues between domain controllers. Do you have any duplicate computers in AD? Many customers still have some and don't know it.

# PowerShell to Report Duplicate Computers
cls
$CDate = Get-Date -format "yyyyMMdd" 
$ScriptPath = Split-Path $MyInvocation.MyCommand.Path -Parent
$ComputerPropsCustom = $("Enabled","Description","LastLogonDate","Modified","whenChanged","PasswordLastSet","OperatingSystem","OperatingSystemServicePack","IPv4Address")
$ComputerPropsSelect = $("Name","SamAccountName","Enabled","DistinguishedName",@{Name="CreatedBy";Expression={$(([ADSI]"LDAP://$($_.DistinguishedName)").psbase.ObjectSecurity.Owner)}},"LastLogonDate","Modified","whenChanged","PasswordLastSet","OperatingSystem","OperatingSystemServicePack","IPv4Address")
        
$DuplicateComputers = Get-ADComputer -Filter {SamAccountName -like "*DUPLICATE-*"} -Properties $ComputerPropsCustom | Select-Object $ComputerPropsSelect | Sort-Object Name
$DuplicateComputers | Export-Csv -Path "$ScriptPath\$($CDate)_DuplicateComputers.csv" -NoTypeInformation
$DuplicateComputers

So let me walk through this short script line by line.

  • Clear the screen with “cls”.
  • Set a variable $CDate with the current date formatted as yyyyMMdd. Example 20191213.
  • Set $ScriptPath to the location of the script we are running.
  • Set $ComputerPropsCustom to a list of custom properties we want to pull.
  • Set $ComputerPropsSelect to all the properties we want in the output in the order desired. Notice we also have a custom defined property CreatedBy which is doing an LDAP lookup on the object to find who created it.
  • Set $DuplicateComputers to the output of the Get-ADComputer cmdlet. We filter the SamAccountName for only computer objects with “DUPLICATE-” in the name. Notice where we use $ComputerPropsCustom and $ComputerPropsSelect. In addition we sort the output by Name.
  • Next we export $DuplicateComputers to a CSV file in the script directory, named with the date, an underscore, and DuplicateComputers.
  • The final line just sends the $DuplicateComputers contents to the screen for us to view.

Notice that the script only pulls information and doesn’t do any actual cleanup yet. Best practice is to do all the gathering first, verify several times that you are only getting the data you expect, and only then add the code that does the cleanup. Below is the final code to add to the bottom of the script above to perform the computer object removal.

# Uncomment next line to remove duplicate computers with no operating system
$DuplicateComputers | ? {$_.OperatingSystem -eq $null} | % {Remove-ADComputer -Identity $($_.DistinguishedName)}

Let’s walk through these last couple of lines.

  • The # line is a comment. I normally comment out the action code of a script while I’m writing and perfecting it. Once I’m 100% sure I’m only getting the data I expect, I remove the #. Usually I’m still a little hesitant, so I add -WhatIf to the end of the action code for one final test.
  • We send or pipe $DuplicateComputers to a filter that only selects objects without an operating system then pipes that into the Remove-ADComputer cmdlet to do the cleanup.
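For reference, the same pipeline with the -WhatIf safety net added looks like this:

```powershell
# -WhatIf reports what would be removed without actually touching AD
$DuplicateComputers | ? {$_.OperatingSystem -eq $null} | % {Remove-ADComputer -Identity $($_.DistinguishedName) -WhatIf}
```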

So although this is a short script it has a hint of some advanced code such as:

  • Date Formatting
  • Determining Script location
  • Setting Properties Arrays
  • Custom Property
  • Cmdlet filtering
  • Object Sorting
  • Exporting to CSV
  • ? = Where
  • % = ForEach-Object

Stay tuned for the next part in this series.

Minecraft on Azure


Introduction

Despite approaching its 10th anniversary, Minecraft remains an incredibly popular game with both children and adults. There are many options to play Minecraft — locally, on a Minecraft hosted realm, or on a public server. In some cases, you may want to retain more control and run Minecraft on your own server. This allows you full control over who has access, as well as the ability to use a variety of community plugins for different gameplay options. This post will discuss the basics of running your own Minecraft server on Azure.

Content

The first step in getting set up to run your Minecraft server on Azure is to log in to your Azure account. If you don’t have one already, you can easily sign up for a trial. Within your Azure subscription, I recommend creating a resource group just for the Minecraft-related resources.

Within this new resource group you will then create a virtual machine. This virtual machine has very few special requirements. You’ll want to ensure you’re using the latest official Ubuntu image, provide an SSH key, and allow access to port 22.

For the most part you can accept the defaults, but you probably want to be careful with the auto-shutoff settings, depending on your personal preferences on play time. After creating the virtual machine (VM), select the VM and click on the network blade. You’ll want to make sure that access on port 22 is limited to your IP address, and add access on port 25565. If you know the IP address ranges of everyone who will play on this server, you can limit port 25565 here as well.
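If you prefer to script the network change, a hypothetical sketch with the Az PowerShell module (the NSG and resource group names are placeholders) could look like:

```powershell
# Add an inbound rule for Minecraft traffic (TCP 25565) to the VM's NSG
$nsg = Get-AzNetworkSecurityGroup -Name 'minecraft-nsg' -ResourceGroupName 'minecraft-rg'
$nsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow-Minecraft' -Priority 1010 `
    -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 25565 |
  Set-AzNetworkSecurityGroup
```

Tighten -SourceAddressPrefix to the players' IP ranges if you know them.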

At this point, you will probably want to go back to the main VM screen and click on configure DNS. This allows you to add a DNS prefix, making it easier to remember and share the address of your server.

Now that the VM is up, all that remains is to download and configure Minecraft. Begin by connecting to the VM via SSH. This can be done via Azure Cloud Shell or a terminal on your machine. You will need the SSH private key you set up as part of creating the VM.

Upon logging into the machine, copy the contents of this gist to the tmp directory, and then run the following commands. The Minecraft server.jar can be found here.


sudo su -
cp /tmp/<gist you just copied> /etc/systemd/system/minecraft.service
apt update && apt upgrade -y
apt install default-jre -y
adduser --system --home /minecraft minecraft
addgroup --system minecraft
adduser minecraft minecraft
systemctl enable minecraft.service
cd /minecraft
wget <latest minecraft server.jar>
echo eula=true > eula.txt
chown -R minecraft:minecraft ../minecraft
systemctl start minecraft
journalctl -u minecraft -f   # allows you to look at the logs
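The actual gist contents are not reproduced here, but a minimal minecraft.service unit of the shape the commands above expect might look like this (memory sizes and paths are illustrative):

```ini
# /etc/systemd/system/minecraft.service - a minimal sketch; the real gist may differ
[Unit]
Description=Minecraft Server
After=network.target

[Service]
User=minecraft
WorkingDirectory=/minecraft
ExecStart=/usr/bin/java -Xms1G -Xmx3G -jar /minecraft/server.jar nogui
Restart=on-failure

[Install]
WantedBy=multi-user.target
```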

Conclusions

At this point you have a working Minecraft server and can connect to it as normal from your Minecraft client. There are numerous potential plugins and options that you can consider.

Next Steps

While we do have our server up and running, there are a number of actions you’ll probably want to tackle next. These may be covered in a future post.

  • Add Azure Firewall / Load Balancer
  • Move server creation to Azure DevOps
  • Add backup of world files
  • Use paper and add some plugins

PowerShell: Active Directory Cleanup – Part 2 – Spacey Computer Names


Introduction

Hello again, Scott Williamson back with the next installment in the series “PowerShell: Active Directory Cleanup”. For this installment we are going to take a look at a script that finds computers that have a space in their name. Per RFC 1123, DNS host names cannot contain white space. This is the most common issue I’ve found when computers are entered manually by IT administrators. When typing, we get so used to adding a space between words that we accidentally do it when creating computer names. Usually the space is at the end of the computer name, so it’s not easily spotted. This script searches Active Directory for computers with a space in their name, writes them to a CSV file, and displays them on the screen for review.

Find Computers with Space(s) in the Name

# Clear the Screen
cls

# This section sets the common variables for the script.
# Get the current date and format it as yyyyMMdd.  The 4 digit year, 2 digit month and 2 digit day.  Example 20191213
$CDate = Get-Date -format "yyyyMMdd" 

# Get the location this script was executed from.
$ScriptPath = Split-Path $MyInvocation.MyCommand.Path -Parent

# Set an array to the additional Computer Properties we need.
$ComputerPropsCustom = $("Enabled","Description","LastLogonDate","Modified","whenChanged","PasswordLastSet","OperatingSystem","OperatingSystemServicePack","IPv4Address")

# Set an array to all the computer properties we want to display.
$ComputerPropsSelect = $("Name","SamAccountName","Enabled","DistinguishedName",@{Name="CreatedBy";Expression={$(([ADSI]"LDAP://$($_.DistinguishedName)").psbase.ObjectSecurity.Owner)}},"LastLogonDate","Modified","whenChanged","PasswordLastSet","OperatingSystem","OperatingSystemServicePack","IPv4Address")

# Search Active Directory for computer objects with a space in their name and sort them by Name.
$ComputerWithSpaces = Get-ADComputer -Filter {Name -like "* *"} -Properties $ComputerPropsCustom | Select-Object $ComputerPropsSelect | Sort-Object Name

# Export the results to a CSV file for review.
$ComputerWithSpaces | Export-Csv -Path "$ScriptPath\$($CDate)_ComputersWithSpaces.csv" -NoTypeInformation

# Display the results to the screen.
$ComputerWithSpaces

I included comment lines above each step to explain what the next line is doing. When writing PowerShell scripts it’s extremely helpful to add comments so that others viewing your scripts can understand what they are doing. These comments will also help you a year or two from now when you go back to use or modify the script.

Summary

Notice the similarities between the script above and the one from Part 1. They share very similar code, with the exception of the filter and the result variable names. Stay tuned for Part 3 of the series.

Series Links:

Field Notes: Azure AD Connect – Migrating from AD FS to Password Hash Synchronization


This is a continuation of a series on Azure AD Connect. I started off this Azure AD Connect series by going through the express installation path, where the password hash synchronization (PHS) sign-in option is selected by default. This was followed by the custom installation path where I selected pass-through authentication (PTA) as a user sign-in option. The third blog post on user sign-in was configuring federation with Active Directory Federation Service (AD FS). Links to these are provided in the summary section below.

Here, I go through migrating from AD FS to PHS. You may want to do this to reduce complexity and server footprint in your environment.

Before we begin

I am running the latest version of Azure AD Connect that I downloaded from http://aka.ms/aadconnect. At a minimum, version 1.1.819.0 is required to successfully complete the migration using the process we are going to cover. See Azure AD Connect: Version release history to keep track of the versions that have been released, and to understand what the changes are in the latest version.

Federation is currently enabled for one domain. PHS is also enabled and the required permissions for the on-premises directory are already in place as per Azure AD Connect: Accounts and permissions (Replicate Directory Changes | Replicate Directory Changes All).

We’ll be using Azure AD Connect to perform the migration since federation was configured with it. One way to confirm that AD FS was set up through Azure AD Connect is to open the federation configuration task under manage federation.

Information such as the federation service name, service account, and certificate details is shown here. Be sure to have documented your setup and to have a valid backup before you proceed in your environment.

Migrating using Azure AD Connect

The swing itself is pretty straightforward. All we do is launch Azure AD Connect and select configure. At the additional tasks page, we select change user sign-in and click next to proceed.

We then connect to Azure AD as normal by providing a Global Admin user name and password. Under user sign-in, we select password hash synchronization. We also need to confirm (by checking the box) that our intention is to convert from federated to managed authentication. Enable single sign-on is turned on by default, and we’ll leave the tick-box checked.

Azure AD domains that are currently federated will be converted to managed and user passwords will be synchronized with Azure AD. This process may take a few hours and cause login failures.

Clicking next takes us to the enable single sign-on page, where we are required to enter a domain administrator account to configure the on-premises directory for use with SSO.

If everything goes well, the cross next to the enter credentials button changes to a green icon with a check mark, and the next button is enabled. There is a problem in our case: an error occurred while locating the computer account.

This is also highlighted in the trace file (C:\ProgramData\AADConnect\trace-*.txt).

Our workaround for now is to delete the AZUREADSSOACC computer account in AD DS that was created by a previous installation. I’ll cover this case in detail in a future post.

That’s it! The conversion happens once we go through the single sign-on page. Another look at Azure AD and voila, federation is now disabled, and seamless single sign-on is enabled for idrockstar.co.za.

A quick test performed by accessing http://aka.ms/myapps reveals that we are no longer redirected to AD FS, but authentication takes place in Azure AD.
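If you prefer a scripted check over the portal, the MSOnline module (assuming it is installed and Connect-MsolService has been run) can report the authentication type per domain:

```powershell
# Lists each verified domain and whether it is Federated or Managed
Get-MsolDomain | Select-Object Name, Authentication
```

After a successful conversion, the previously federated domain should report Managed.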

Summary

We have just quickly gone through the process of migrating sign-on in Azure AD from federation with AD FS to PHS. Be sure to check the deployment considerations if you plan to perform the migration in your environment.

Related posts

Till next time…

How to enable Internet and vNET connectivity for nested VMs in Azure


For a full walk-through of this setup, please watch the video at the end of this post.

Greetings readers,

Hyper-V nested virtualization in Azure has unlocked different scenarios and use cases, such as sandbox environments and running unsupported operating systems or legacy applications that require features not natively supported in Azure (think of an application with licenses tied to a MAC address, for example).

In certain scenarios, you want those nested VMs to connect to the Internet or to other VMs in Azure. However, due to restrictions on the network fabric, it is not possible to create an external switch and give the VMs direct access to the host’s physical network. The solution is to configure NAT, so that the VMs can access the Internet through the host’s NATed public IP, plus routing, to enable connectivity to other VMs in Azure. In this blog post, I will walk you through configuring nested VM networking to achieve those goals.

Build the virtual network

We will need to build a vNet with two subnets: one for the host LAN traffic, which may include other Azure VMs as well, and another for Internet traffic, where we will enable NAT.

Example:

LabVnet – 10.2.0.0/16 (Main address space)

NAT Subnet – 10.2.0.0/24

LAN Subnet – 10.2.1.0/24

Later on, we will use the 10.2.2.0/24 virtual address space for the nested VMs running inside the Hyper-V host.

Build the Hyper-V Host VM

  • Create a new Azure VM that will be your Hyper-V host. Make sure you pick a size that supports nested virtualization, and connect the first network adapter to the NAT subnet as you build the VM. It is important that the first adapter be connected to the NAT subnet because, by default, all outbound traffic is sent through the primary network interface.
  • Once the VM is provisioned, add a secondary network adapter and connect it to the LAN subnet.

Configure the Hyper-V Host

Install the necessary roles for the next steps:

  • Hyper-V
  • DHCP
  • Routing (RRAS)

DHCP will be used to automatically assign IP addresses to the nested VMs and RRAS will be used to route traffic between the nested VMs and other Azure VMs as well as provide NAT for Internet access.

Install-WindowsFeature -Name Hyper-V,DHCP,Routing -IncludeManagementTools -Restart

Create a virtual switch that will be used by the nested VMs as a bridge for NAT and routing:

New-VMSwitch -Name "Nested" -SwitchType Internal
New-NetIPAddress -IPAddress 10.2.2.1 -PrefixLength 24 -InterfaceAlias "vEthernet (Nested)"

Rename the network adapters on the Hyper-V host to match the subnet names in Azure; this will make it easier to identify the networks when configuring routing. In this example, this is what the host network settings look like after creating the switch.

Configure DHCP

Create a DHCP scope that will be used to automatically assign IP to the nested VMs. Make sure you use a valid DNS server so the VMs can connect to the internet. In this example, we are using 8.8.8.8 which is Google’s public DNS.

Add-DhcpServerV4Scope -Name "Nested" -StartRange 10.2.2.2 -EndRange 10.2.2.254 -SubnetMask 255.255.255.0
Set-DhcpServerV4OptionValue -DnsServer 8.8.8.8 -Router 10.2.2.1

Configure RRAS

First, we will enable NAT for Internet access. Open the Routing and Remote Access console and choose a custom configuration with NAT and routing. Once the service has started, navigate to IPv4, right-click NAT, and select New Interface. Select the interface that matches your NAT subnet and enable NAT as follows:

We will now configure static routes to allow traffic from the nested VMs to other VMs connected to the Azure virtual network.

Under IPv4, right-click static routes, select new static route and create routes as follows:

This route allows the primary interface to respond to traffic destined to it out of its own interface. This is needed to avoid an asymmetric route.

Create a second route for traffic destined to the Azure vNet. In this case, we are using 10.2.0.0/16, which encompasses our LabVnet, including the Hyper-V LAN subnet.

At this point, our host is ready to automatically assign IPs to the nested VMs, and it can allow those VMs to connect to the Internet with RRAS NATing the traffic.

Configure User-Defined Routes

The last step in the process is to configure UDRs in Azure to enable traffic to flow back and forth between VMs connected to the Azure vNet and nested VMs on our Hyper-V host. We do so by telling Azure to send all traffic destined to our nested VMs (10.2.2.0/24 in this example) to the LAN IP of our Hyper-V host, where RRAS will route the traffic to the VMs via the internal switch created earlier.

#Create Route Table
$routeTableNested = New-AzRouteTable `
  -Name 'nestedroutetable' `
  -ResourceGroupName nestedvm-rg `
  -location EastUS

#Create route with nested VMs destination and Hyper-V host LAN IP as a next-hop
$routeTableNested  | Add-AzRouteConfig `
  -Name "nestedvm-route" `
  -AddressPrefix 10.2.2.0/24 `
  -NextHopType "VirtualAppliance" `
  -NextHopIpAddress 10.2.1.4 `
 | Set-AzRouteTable

#Associate the route table to the LAN subnet
 Get-AzVirtualNetwork -Name labvnet | Set-AzVirtualNetworkSubnetConfig `
 -Name 'lan' `
 -AddressPrefix 10.2.1.0/24 `
 -RouteTable $routeTableNested | `
Set-AzVirtualNetwork

After creating an additional Azure VM to test connectivity from outside the host, our final network topology looks like this:

Conclusion

We now have full connectivity to both the Internet and other VMs connected to the Azure vNet, and the nested VMs are reachable by devices outside the Hyper-V host.

Refer to the video below for a full walk-through: 

Nested VMs Networking

Configuration Manager – How Updates install during a Maintenance Window.


This is a question I have had since I started with SCCM 2007. I thought I had a grasp of it until I was talking with a customer and started second guessing myself.

Why aren’t all my updates installing during the Maintenance Window?

Why do I have Servers in a Reboot Pending State after our scheduled Windows Update weekend?

I have a 3-hour Maintenance Window defined, that should be lots of time…

Customer Questions

I started doing some research to find a definitive answer to these questions, and everything I could find referenced old blog posts that no longer exist or was pretty unclear, so I set up my lab and sat down to get some concrete information.

I started with a pretty old Windows Server 2012 R2 image, so I know there are lots of updates to apply.

In UpdatesDeployment.log we see that the client tries to install the update when the deadline hits; since we aren’t in a Maintenance Window, it will wait until one is available before attempting the install again.

 No current service window available to run updates assignment with time required = 600 UpdatesDeploymentAgent 11/29/2019 3:20:06 PM 3840 (0x0F00)
No service window available to run updates assignment UpdatesDeploymentAgent 11/29/2019 3:20:06 PM 3840 (0x0F00)
This assignment ({CDCB2B61-2743-4A16-A8B4-CA2949E85BF3}) will be retried once the service window is available. UpdatesDeploymentAgent 11/29/2019 3:20:06 PM 3840 (0x0F00)

I then deployed a Maintenance Window. When the Maintenance Window starts, we see in ServiceWindowManager.log that, as each update attempts to install, the client checks whether there is enough time remaining in the Maintenance Window to complete the install. This check is based on the Max Run Time attribute of the software update.
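The check itself is simple arithmetic, sketched here in PowerShell using the times from the log entries in this post:

```powershell
# Times taken from the ServiceWindowManager.log example below
$windowEnd  = Get-Date '2019-11-29 16:30:00'    # end of the active Maintenance Window
$now        = Get-Date '2019-11-29 15:31:13'    # when the update tried to install
$maxRunTime = 3600                              # the update's Max Run Time, in seconds

$remaining = ($windowEnd - $now).TotalSeconds   # 3527 seconds left
if ($maxRunTime -le $remaining) {
    'Program can run!'
} else {
    'Program cannot run!'                       # 3600 > 3527, so the update is deferred
}
```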

If there is enough time remaining in the Maintenance Window, you will see the following entries in ServiceWindowManager.log:

OnIsServiceWindowAvailable called with: Runtime:600, Type:4    ServiceWindowManager    11/29/2019 3:30:05 PM   560 (0x0230)
No Service Windows exist for this type. Will check if the program can run in the All Programs window… ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)
Biggest Active Service Window has ID = {14D90B4F-4BB8-4070-85A0-806C2800AD5D} having Starttime=11/29/19 15:30:00 ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)
Duration is 0 days, 01 hours, 00 mins, 00 secs ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)
ActiveServiceWindow has 3595 seconds left ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)
Program can run! Setting *canProgramRun to TRUE ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)

If there isn’t enough time remaining in the Maintenance Window, you will see the following entries in ServiceWindowManager.log:

OnIsServiceWindowAvailable called with: Runtime:3600, Type:4    ServiceWindowManager    11/29/2019 3:31:13 PM   2764 (0x0ACC)
No Service Windows exist for this type. Will check if the program can run in the All Programs window… ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
Biggest Active Service Window has ID = {14D90B4F-4BB8-4070-85A0-806C2800AD5D} having Starttime=11/29/19 15:30:00 ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
Duration is 0 days, 01 hours, 00 mins, 00 secs ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
FindBiggestMergedTimeWindow called with TimeStart=11/29/19 15:31:13 and TimeEnd=11/29/19 16:30:00 ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
Biggest Chainable Service Window for Type=1 not found ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)

Program cannot Run! Setting *canProgramRun to FALSE ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
WillProgramRun called with: Runtime:3600, Type:4 ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
No Service Windows of this type exist. ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
There exists an All Programs window for this duration. The Program will run eventually. ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)

As well as the following entries in UpdatesHandler.log:

No current service window available with time required = 3600    UpdatesHandler  11/29/2019 3:32:56 PM   2764 (0x0ACC)
Not enough service window available to run update (03a8098b-7740-40da-9082-00ea285035be) UpdatesHandler 11/29/2019 3:32:56 PM 2764 (0x0ACC)

Once everything that can be installed during the Maintenance Window has been installed, the client attempts to reboot the machine. This is where the next thing can interfere: the Computer Restart settings, specifically “Display a temporary notification to the user that indicates the interval before the user is logged off or the computer restarts (minutes)”. For a workstation this setting makes sense, but for a server it could cause the machine to overshoot its maintenance window.

Assume someone is logged onto the server and you have this set to the default, which is 90 minutes (5400 seconds). Once you are within 90 minutes of the end of your maintenance window, the machine will not reboot automatically, and you will see the following in ServiceWindowManager.log:

OnIsServiceWindowAvailable called with: Runtime:5400, Type:4    ServiceWindowManager    11/29/2019 4:16:04 PM   4072 (0x0FE8)
No Service Windows exist for this type. Will check if the program can run in the All Programs window… ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)
Biggest Active Service Window has ID = {14D90B4F-4BB8-4070-85A0-806C2800AD5D} having Starttime=11/29/2019 3:30:00 PM ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)
Duration is 0 days, 01 hours, 00 mins, 00 secs ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)
FindBiggestMergedTimeWindow called with TimeStart=11/29/2019 4:16:04 PM and TimeEnd=11/29/2019 4:30:00 PM ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)
Biggest Chainable Service Window for Type=1 not found ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)

Program cannot Run! Setting *canProgramRun to FALSE ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)

When you logon to the server you will see the “Recently installed software requires a computer restart” message, along with the Task Bar Icon.

The computer will automatically reboot during the next Maintenance Window; this is usually too late, as by then you are attempting to install more updates.

Now, to answer the customer’s questions:

Why aren’t all my updates installing during the Maintenance Window? – If an update’s Max Run Time is set to 120 minutes (2 hours), then once you are within 120 minutes of the end of the maintenance window there is no longer enough time to install it.

Why do I have Servers in a Reboot Pending State after our scheduled Windows Update weekend? – If someone is logged onto the server (even in a disconnected state), your maintenance window is effectively reduced by the time specified in the Computer Restart setting “Display a temporary notification to the user that indicates the interval before the user is logged off or the computer restarts (minutes)”. So for your server infrastructure you may want to reduce this to 2 minutes, with “Display a dialog box that the user cannot close, which displays the countdown interval before the user is logged off or the computer restarts (minutes)” set to 1 minute.

I have a 3-hour Maintenance Window defined, that should be lots of time… – Well, that depends on the Max Run Time of all the deployments, along with the reboot settings if someone is logged on.

I hope I have imparted some useful information about how updates and Maintenance Windows interact. I know I learned a lot doing this.

Setup Hybrid Azure AD Join – Part 1


In addition to users, device identities can be managed by Azure Active Directory as well, even if they are already managed on your on-premises network. This two-part series will walk you through the steps to allow your devices to be both on-premises and Azure Active Directory joined, otherwise known as hybrid Azure AD join. Parts 1 and 2 are listed below. This post will step you through configuring pass-through authentication.

  1. Configure Pass-through authentication
  2. Setup Hybrid Azure AD Join

Configure Pass-Through Authentication

Pass-through authentication (PTA) allows users to use the same password to sign in to their organization’s network and to Azure cloud applications. For more info on PTA, click here.

Prerequisites

  • Install the latest version of AD Connect (1.4.38.0)
  • Install AD Connect on Windows Server 2012 R2 or later
  • Authentication Agents need access to
    • login.windows.net
    • login.microsoftonline.com
  • Whitelist connections to:
    • *.msappproxy.net
    • *.servicebus.windows.net
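A quick way to verify that reachability from the server that will host an Authentication Agent is with Test-NetConnection:

```powershell
# Spot-check outbound HTTPS connectivity to the required endpoints
Test-NetConnection login.windows.net -Port 443
Test-NetConnection login.microsoftonline.com -Port 443
```

For the wildcard hosts (*.msappproxy.net, *.servicebus.windows.net), test against the specific host names your tenant uses.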

Steps to configure pass-through authentication

After installing AD Connect, the configuration screen will open; click Customize.

Accept the defaults on this page and click Install. SQL Server Express will be installed, which supports up to 100,000 users. Install SQL Server 2016 or higher to support more than 100,000 users.

Select Pass-Through Authentication

Use your Azure AD global administrator credentials to log in. Enter your username and password.

Select the first option to create a new AD account. This will require your on-premises enterprise admin account. This account will be used for periodic synchronization.

Click Add Directory for synchronization

This page lists the UPN domains present in your organization's AD that have been verified in Azure AD. You can also use this page to configure the attribute to use for userPrincipalName.

Select the OUs that you would like to synchronize.

Select how users should be identified in your on-premises directories. You can leave the defaults.

Select which users and devices to synchronize.

Select optional features if desired.

On the ready to configure page, select start the synchronization process when configuration completes.

A successful configuration page.

This process installs the first authentication agent. To validate the process, sign in to the Azure portal and confirm that the Sync Status is “enabled” and that pass-through authentication is “enabled”.
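The directory sync status can also be checked from PowerShell. A minimal sketch using the MSOnline module (this assumes the module is installed and that you sign in with a global administrator account):

```powershell
# Check directory synchronization status for the tenant
Connect-MsolService
Get-MsolCompanyInformation |
    Select-Object DirectorySynchronizationEnabled, LastDirSyncTime
```

DirectorySynchronizationEnabled should return True once synchronization is configured, and LastDirSyncTime shows when the last sync cycle completed.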


Setup Hybrid Azure AD Join – Part 2


Welcome back to the second and final post on setting up Hybrid Azure AD join. Hopefully all went well with configuring pass-through authentication. Below you will find a link back to part 1.

  1. Configure Pass-Through Authentication
  2. Setup Hybrid Azure AD Join

Setup Hybrid Azure AD Join

Consider the following prerequisites before moving forward.

Prerequisites

Steps to configure Hybrid Azure AD join

Because we ran AD Connect in part 1 to connect Active Directory to Azure AD, the initial first-run options will not be available. When AD Connect opens, click Customize.

Select “Configure device options” – This option is used to configure device registration for Hybrid Azure AD Join.

On the overview page, click Next

Connect to Azure AD using an account with global administrator rights.

On the device options page, select “Configure Hybrid Azure AD join” then Next

Supported devices

  • Windows 10
  • Windows Server 2016
  • Windows Server 2019

Downlevel devices

  • Windows 8.1
  • Windows 7
  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2008 R2

Select the appropriate option on the device operating system page based on the devices you have in your organization.

On the SCP configuration page, do the following:

  • First check the box under Forest
  • Under Authentication Service, click the drop-down and select Azure Active Directory. If a federation service has been configured, select that option instead.
  • Click Add to supply the enterprise admin account for the on-premises forest.

On the ready to configure page, click Configure

Confirm device registration.

Use the Get-MsolDevice cmdlet in the MSOnline module to verify the device registration state in your Azure tenant. Before you begin, you will need the deviceId of a computer that should be registered in Azure AD. Find the computer in your on-premises Active Directory, right-click the computer > Properties > Attribute Editor > scroll down to objectGUID, and use that value as the deviceId. Open PowerShell ISE and run the code below.

Install-Module MSOnline -Force
Import-Module MSOnline
$msolcred = Get-Credential
Connect-MsolService -Credential $msolcred -AzureEnvironment AzureCloud
Get-MsolDevice -DeviceId 7q52824c-30k1-8d1c-a947-ab34643ffddc

From the results above confirm the following.

  • An object with a device id that matches the objectGUID on the on-premises computer must exist.
  • The value for DeviceTrustType must be Domain Joined. This is equivalent to the Hybrid Azure AD joined state on the Devices page in the Azure AD portal.
  • The value for Enabled must be True and DeviceTrustLevel must be Managed for devices that are used in conditional access.
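To get a tenant-wide view rather than checking one device at a time, the same cmdlet can enumerate all registered devices. A short sketch (this assumes you are already connected with Connect-MsolService):

```powershell
# List all hybrid Azure AD joined devices in the tenant
Get-MsolDevice -All |
    Where-Object { $_.DeviceTrustType -eq 'Domain Joined' } |
    Select-Object DisplayName, DeviceId, Enabled, DeviceTrustLevel
```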

Troubleshoot Hybrid Azure AD join:
If you are experiencing issues completing Hybrid Azure AD join for domain-joined Windows devices, see Microsoft's troubleshooting documentation for Hybrid Azure AD joined devices.

Cleaning Up the Mess in Your Group Policy (GPO) Environment


Intro

Group Policy is a great way to enforce policies and set preferences for any user or computer in your organization.
However, anyone who has managed Group Policy knows it can become very messy over time, especially when many administrators manage the Group Policy Objects (GPOs) in the company.

In this blog post series, we will cover some useful scripts and methods which will help you organize and maintain your GPOs, and clean up the mess in your Group Policy environment.

First Things First – Create a backup

Before removing or modifying any Group Policy Object, it is highly recommended to create a backup of the current state of your Group Policy Objects.
This can be done using the Group Policy Management Console (GPMC), or by using the PowerShell cmdlet “Backup-GPO”.
To back up all GPOs, run the following PowerShell command:

Backup-GPO -All -Path "C:\Backup\GPO"
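Should you ever need to roll back a change, a GPO can be restored from that same backup location with the companion Restore-GPO cmdlet. A sketch (the GPO name here is a placeholder; substitute one of your own policies):

```powershell
# Restore a single GPO from the most recent backup in the folder
Restore-GPO -Name "Default Domain Policy" -Path "C:\Backup\GPO"
```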

You can also create a scheduled task to back up Group Policy on a daily or weekly basis.
Use the following script to create the backup scheduled task for you automatically:

Function Create-GPScheduleBackup
{
    $Message = "Please enter the credentials of the user which will run the scheduled task"; 
    $Credential = $Host.UI.PromptForCredential("Please enter username and password",$Message,"$env:userdomain\$env:username",$env:userdomain)
    $SchTaskUsername = $credential.UserName
    $SchTaskPassword = $credential.GetNetworkCredential().Password
    $SchTaskScriptCode = '$Date = Get-Date -Format "yyyy-MM-dd_hh-mm"
    $BackupDir = "C:\Backup\GPO\$Date"
    $BackupRootDir = "C:\Backup\GPO"
    if (-Not (Test-Path -Path $BackupDir)) {
        New-Item -ItemType Directory -Path $BackupDir
    }
    $ErrorActionPreference = "SilentlyContinue" 
    Get-ChildItem $BackupRootDir | Where-Object {$_.CreationTime -le (Get-Date).AddMonths(-3)} | Foreach-Object { Remove-Item $_.FullName -Recurse -Force}
    Backup-GPO -All -Path $BackupDir'
    $SchTaskScriptFolder = "C:\Scripts\GPO"
    $SchTaskScriptPath = "C:\Scripts\GPO\GPOBackup.ps1"
    if (-Not (Test-Path -Path $SchTaskScriptFolder)) {
        New-Item -ItemType Directory -Path $SchTaskScriptFolder
    }
    if (-Not (Test-Path -Path $SchTaskScriptPath)) {
        New-Item -ItemType File -Path $SchTaskScriptPath
    }
    $SchTaskScriptCode | Out-File $SchTaskScriptPath
    $SchTaskAction = New-ScheduledTaskAction -Execute 'PowerShell.exe' -Argument "-ExecutionPolicy Bypass $SchTaskScriptPath"
    $Frequency = "Daily","Weekly"
    $SelectedFrequency = $Frequency | Out-GridView -OutputMode Single -Title "Please select the required frequency"
    Switch ($SelectedFrequency) {
        Daily {
            $SchTaskTrigger =  New-ScheduledTaskTrigger -Daily -At 1am
        }
        Weekly {
            $Days = "Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"
            $SelectedDays = $Days | Out-GridView -OutputMode Multiple -Title "Please select the relevant days in which the scheduled task will run"
            $SchTaskTrigger =  New-ScheduledTaskTrigger -Weekly -DaysOfWeek $SelectedDays -At 1am
        }
    }  
    Try {
        Register-ScheduledTask -Action $SchTaskAction -Trigger $SchTaskTrigger -TaskName "Group Policy Schedule Backup" -Description "Group Policy $SelectedFrequency Backup" -User $SchTaskUsername -Password $SchTaskPassword -RunLevel Highest -ErrorAction Stop
    }
    Catch {
        $ErrorMessage = $_.Exception.Message
        Write-Host "Scheduled task registration failed due to the following error: $ErrorMessage" -f Red
    }
}
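Running the function prompts for credentials and a frequency, then registers the task. You can confirm the registration afterwards with the ScheduledTasks module:

```powershell
Create-GPScheduleBackup
# Verify the scheduled task was created and is ready to run
Get-ScheduledTask -TaskName "Group Policy Schedule Backup" | Select-Object TaskName, State
```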

Step 2 – Get Rid of Useless GPOs

There are probably a lot of useless GPOs in your Group Policy environment.
By useless, I mean Group Policies that are empty, disabled or not linked to any Organizational Unit (OU).

Each of the PowerShell functions below will create a report (Grid-View) with the affected GPOs (disabled, empty, and unlinked), and remove those GPOs if requested by the user.

Note that all scripts use a ‘ReadOnlyMode’ parameter, which is set to ‘True’ by default to prevent any unwanted changes and modifications to your environment.

Remove Disabled GPOs

Disabled GPOs are Group Policies configured with GPO status “All Settings Disabled”, which makes them have no effect on computer and user policy. The following PowerShell script will identify those ‘Disabled’ Group Policies and provide you with the option to delete selected objects from your environment.

Function Get-GPDisabledGPOs ($ReadOnlyMode = $True) {
    ""
    "Looking for disabled GPOs..."
    $DisabledGPOs = @()
    Get-GPO -All | ForEach-Object {
        if ($_.GpoStatus -eq "AllSettingsDisabled") {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Yellow -NoNewline; Write-Host " is configured with 'All Settings Disabled'"
            $DisabledGPOs += $_
        }
        Else {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Green -NoNewline; Write-Host " is enabled"         
        }
    }
    Write-Host "Total GPOs with 'All Settings Disabled': $($DisabledGPOs.Count)" -f Yellow
    $GPOsToRemove = $DisabledGPOs | Select Id,DisplayName,ModificationTime,GpoStatus | Out-GridView -Title "Showing disabled Group Policies. Select GPOs you would like to delete" -OutputMode Multiple
    if ($ReadOnlyMode -eq $False -and $GPOsToRemove) {
        $GPOsToRemove | ForEach-Object {Remove-GPO -Guid $_.Id -Verbose}
    }
    if ($ReadOnlyMode -eq $True -and $GPOsToRemove) {
       Write-Host "Read-Only mode is enabled. Change the 'ReadOnlyMode' parameter to 'False' in order to allow the script to make changes" -ForegroundColor Red 
    }
}
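All three functions in this post follow the same calling pattern: by default they only report, and passing ‘ReadOnlyMode’ as $False enables the deletion step after you make your selection in the Grid-View. For example:

```powershell
# Report only (default read-only mode)
Get-GPDisabledGPOs

# Allow the GPOs selected in the Grid-View to be removed
Get-GPDisabledGPOs -ReadOnlyMode $False
```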


Remove Unlinked GPOs

Group Policies can be linked to an AD site, to a specific OU, or at the domain level.
Unlinked GPOs are Group Policies that are not linked to any of the above, and therefore have no effect on computers and users in the domain. The following PowerShell script will identify those ‘Unlinked’ Group Policies and provide you with the option to delete selected objects from your environment.

Function Get-GPUnlinkedGPOs ($ReadOnlyMode = $True) { 
    ""
    "Looking for unlinked GPOs..."
    $UnlinkedGPOs = @()
    Get-GPO -All | ForEach-Object {
        If ($_ |Get-GPOReport -ReportType XML | Select-String -NotMatch "<LinksTo>" ) {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Yellow -NoNewline; Write-Host " is not linked to any object (OU/Site/Domain)"
            $UnlinkedGPOs += $_
        }
        Else {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Green -NoNewline; Write-Host " is linked"         
        }
    }
    Write-Host "Total of unlinked GPOs: $($UnlinkedGPOs.Count)" -f Yellow
    $GPOsToRemove = $UnlinkedGPOs | Select Id,DisplayName,ModificationTime | Out-GridView -Title "Showing unlinked Group Policies. Select GPOs you would like to delete" -OutputMode Multiple
    if ($ReadOnlyMode -eq $False -and $GPOsToRemove) {
        $GPOsToRemove | ForEach-Object {Remove-GPO -Guid $_.Id -Verbose}
    }
    if ($ReadOnlyMode -eq $True -and $GPOsToRemove) {
       Write-Host "Read-Only mode is enabled. Change the 'ReadOnlyMode' parameter to 'False' in order to allow the script to make changes" -ForegroundColor Red 
    }
}


Remove Empty GPOs

An empty GPO is a Group Policy Object that does not contain any settings.
An empty Group Policy can be identified by the user/computer versions of the GPO (when both equal ‘0’), or when the Group Policy report extension data is null.

The following PowerShell script will identify ‘Empty’ Group Policies using the methods described above, and provide you with the option to delete selected objects from your environment.

Function Get-GPEmptyGPOs ($ReadOnlyMode = $True) {
    ""
    "Looking for empty GPOs..."
    $EmptyGPOs = @()
    Get-GPO -All | ForEach-Object {
        $IsEmpty = $False
        If ($_.User.DSVersion -eq 0 -and $_.Computer.DSVersion -eq 0) {
            Write-Host "The Group Policy " -nonewline; Write-Host $_.DisplayName -f Yellow -NoNewline; Write-Host " is empty (no settings configured - User and Computer versions are both '0')"
            $EmptyGPOs += $_
            $IsEmpty = $True
        }
        Else {
            [xml]$Report = $_ | Get-GPOReport -ReportType Xml
            If ($Report.GPO.Computer.ExtensionData -eq $NULL -and $Report.GPO.User.ExtensionData -eq $NULL) {
                Write-Host "The Group Policy " -nonewline; Write-Host $_.DisplayName -f Yellow -NoNewline; Write-Host " is empty (no settings configured - No data exist)"
                $EmptyGPOs += $_
                $IsEmpty = $True
            }
        }
        If (-Not $IsEmpty) {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Green -NoNewline; Write-Host " is not empty (contains data)"        
        }
    }
    Write-Host "Total of empty GPOs: $($EmptyGPOs.Count)" -f Yellow
    $GPOsToRemove = $EmptyGPOs | Select Id,DisplayName,ModificationTime | Out-GridView -Title "Showing empty Group Policies. Select GPOs you would like to delete" -OutputMode Multiple
    if ($ReadOnlyMode -eq $False -and $GPOsToRemove) {
        $GPOsToRemove | ForEach-Object {Remove-GPO -Guid $_.Id -Verbose}
    }
    if ($ReadOnlyMode -eq $True -and $GPOsToRemove) {
       Write-Host "Read-Only mode is enabled. Change the 'ReadOnlyMode' parameter to 'False' in order to allow the script to make changes" -ForegroundColor Red 
    }
}

In the next chapter, we will continue to review advanced methods and different ways of cleaning up Group Policy from unwanted GPOs. Stay tuned!




