Secure Infrastructure Blog

Most Common Mistakes in Active Directory and Domain Services – Part 1


As a Premier Field Engineer (PFE) at Microsoft, I encounter new challenges on a daily basis. Every customer is unique, and each environment is different from the next.

And yet, there are several things I encounter over and over again: common mistakes that IT administrators make because of a lack of knowledge or because of product changes they are not aware of.

This blog post is the first part of a series which will cover several of those mistakes. So… Let’s get started!

Mistake #1: Configuring Multiple Password Policies for Domain Users Using Group Policy

When reviewing Group Policy settings, I often find Group Policies Objects (GPOs) that contain ‘Password Policy’ settings.

For example, when looking into a “Servers Policy” GPO, I can see that it has Password Policy settings defined, including Maximum password age, Minimum password length and so on.

When I ask the customer about it, they tell me that this policy was built to set a different password policy for some admin accounts or for another group of users.

As you already know (or might have guessed), this is NOT the correct way to configure different Password Policies in your environment. Here’s why:

  • Password Policy settings in GPO affect computers, not users.
  • When you change your Domain User password, the password change takes place on the Domain Controllers.
  • Therefore, the Password Policy that takes effect is the one applied to your Domain Controllers, usually by the 'Default Domain Policy' GPO.
  • More accurately, the Domain Controller that holds the PDC Emulator FSMO role is the one responsible for applying the Password Policy at the domain level.
  • In terms of Group Policy, there can be only one password policy for domain users.

Bottom Line: Configuring a GPO with password policy settings and linking it to an Organizational Unit (OU) won’t change the password policy for users within that OU.

Do It Right: Use Fine-Grained Password Policies (FGPP).
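As a rough illustration, here is a minimal sketch of creating and applying a fine-grained password policy with the ActiveDirectory PowerShell module (the policy name, settings, group and user below are placeholders, not recommendations):

# Minimal sketch - assumes the ActiveDirectory RSAT module and sufficient rights; values are placeholders.
Import-Module ActiveDirectory

# Create a stricter password policy for privileged accounts
New-ADFineGrainedPasswordPolicy -Name "Admins-PSO" -Precedence 10 `
    -MinPasswordLength 14 -MaxPasswordAge "42.00:00:00" -ComplexityEnabled $true `
    -LockoutThreshold 10 -LockoutDuration "00:30:00" -LockoutObservationWindow "00:30:00"

# Apply the policy to a group of admin accounts
Add-ADFineGrainedPasswordPolicySubject -Identity "Admins-PSO" -Subjects "Domain Admins"

# Verify which policy actually wins for a specific user
Get-ADUserResultantPasswordPolicy -Identity "adminuser"

Note that FGPPs apply to users and global security groups (not OUs), and they require a domain functional level of Windows Server 2008 or higher.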

Mistake #2: Removing “Authenticated Users” from the Group Policy Object Security Filtering

In June 2016, Microsoft released a security update that changes the security context with which user group policies are retrieved.

Before that update, user group policies were retrieved by using the user’s security context. After installing the update, user group policies are retrieved by using the computer's security context.

Therefore, you should always make sure that any Group Policy in your environment could be retrieved by the relevant computer accounts.

Because a lot of people are not aware of this change, I usually find Group Policies with missing permissions that are not being applied at all.

When changing the Group Policy Security Filtering scope from “Authenticated Users” to any other group, “Authenticated Users” (which includes computer accounts as well) is removed from the Group Policy Delegation tab. As a result, computer accounts don’t have the necessary “Read” permission to access and retrieve group policies.

In recent versions of Group Policy Management, a warning message appears when removing the default “Authenticated Users” from the “Security Filtering” tab:

That is why you must validate that every Group Policy has the “Authenticated Users” or “Domain Computers” group with “Read” permissions. Make sure that you specify the “Read” permission only, without selecting the “Apply group policy” permission (otherwise any user or computer will apply this Group Policy).

The following PowerShell function can help you identify GPOs with missing permissions (missing both 'Authenticated Users' and ‘Domain Computers' groups):
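(The original function is not reproduced here; the snippet below is a minimal sketch of the same idea, assuming the GroupPolicy RSAT module. It simply lists GPOs where neither group has any permission entry.)

# Minimal sketch - assumes the GroupPolicy RSAT module; adjust group names for non-English domains.
Import-Module GroupPolicy

Get-GPO -All | ForEach-Object {
    $trustees = Get-GPPermission -Guid $_.Id -All | ForEach-Object { $_.Trustee.Name }
    if (($trustees -notcontains 'Authenticated Users') -and ($trustees -notcontains 'Domain Computers')) {
        $_ | Select-Object DisplayName, Id
    }
}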

Bottom Line: Group Policies with missing permissions for computer accounts (“Authenticated Users”, “Domain Computers” or any other group that includes the relevant computers) will NOT be applied.

Do It Right: When changing Group Policy Security Filtering, make sure you add the “Authenticated Users” group in the delegation tab and provide it with “Read” permission only.

Mistake #3: Creating a DNS Conditional Forwarder as a Non-Active Directory Integrated Zone

When creating a DNS conditional forwarder using the DNS management console (GUI), it’s created, by default, as a non-Active Directory integrated zone, meaning that it’s saved locally in the server’s registry.

Creating a non-Active Directory integrated zone raises a few problems:

  • Non-Active Directory zones do NOT replicate between Active Directory-integrated DNS servers, so these zones might become out of sync when configured on two or more DNS servers.
  • Non-Active Directory zones can easily be forgotten and abandoned when replacing Domain Controllers as part of upgrade or restore procedures.
  • In many cases, non-Active Directory conditional forwarder zones are defined on a single server, which causes inconsistent behavior between servers in terms of DNS resolution.

You can easily change this and create the zone as an Active Directory integrated zone by selecting the option “Store this conditional forwarder in Active Directory”.

Using PowerShell, you can specify the parameter ‘ReplicationScope’ with either ‘Forest’ or ‘Domain’ scope to store the conditional forwarder zone in Active Directory:
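For example (a minimal sketch assuming the DnsServer module; the zone name and master server IP are placeholders):

# Create an AD-integrated conditional forwarder, replicated to all DNS servers in the forest
Add-DnsServerConditionalForwarderZone -Name "partner.contoso.com" -MasterServers 10.10.10.10 -ReplicationScope "Forest"

# An existing registry-based conditional forwarder can also be converted to AD-integrated
Set-DnsServerConditionalForwarderZone -Name "partner.contoso.com" -ReplicationScope "Forest"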

Bottom Line: Avoid using non-Active Directory integrated zones unless you have a really good reason.

Do It Right: When creating a conditional forwarder using either PowerShell or the GUI, make sure to create it as an Active Directory-integrated forwarder.

 

Continue reading part 2 of the series.


Most Common Mistakes in Active Directory and Domain Services – Part 2


In this blog post, we will continue to explore some of the most common mistakes in Active Directory and Domain Services.
Part 1 of the series covered the first three mistakes, and today we'll go over another three interesting issues. Enjoy your reading 🙂

Mistake #4: Keeping the Forest and Domain Functional Levels at a Lower Version

For various reasons, customers are afraid of dealing with the Forest and Domain Functional Levels (FFL and DFL in short).

Because the purpose and impact of the FFL and DFL are not always clear, people avoid changing them and sometimes maintain a very old functional level like Windows Server 2008 or even Windows Server 2003.

The Forest and Domain Functional Levels reflect the lowest Domain Controller version within the forest and the domain.
In other words, this attribute is telling the Domain Controllers that all DCs in the Domain or Forest are running an OS equal to or higher than the functional level. For example, a functional level of Windows Server 2012R2 means that all DCs are running a Windows Server 2012R2 OS and above.

The functional level is used by the Active Directory to understand whether it’s possible to take advantage of new features that require the Domain Controllers to be at a minimum OS version.
The FFL and DFL are also used to prevent promoting an old Domain Controller version in the domain, as it might, theoretically, affect the usability of new AD features being used by newer OS versions.

An old Forest/Domain Functional Level may prevent you from using some very useful Active Directory features like the Active Directory Recycle Bin, domain-based DFS namespaces, DFS Replication for SYSVOL and Fine-Grained Password Policies.
In this link, you can find the full list of Active Directory Features in each functional level.

It's also worth mentioning that you can roll back the FFL and DFL all the way down to Windows Server 2008R2 using the Set-ADForestMode and Set-ADDomainMode PowerShell cmdlets. See the example below:
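A minimal sketch, assuming the ActiveDirectory module and Enterprise Admins rights (the domain name is a placeholder):

# Raise the forest and domain functional levels
Set-ADForestMode -Identity "contoso.com" -ForestMode Windows2016Forest
Set-ADDomainMode -Identity "contoso.com" -DomainMode Windows2016Domain

# Roll back if needed (supported down to Windows Server 2008 R2 mode,
# provided no enabled feature requires the higher level)
Set-ADForestMode -Identity "contoso.com" -ForestMode Windows2012R2Forest
Set-ADDomainMode -Identity "contoso.com" -DomainMode Windows2012R2Domain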

Bottom Line: Forest and Domain Functional Levels are used internally by the Domain Controllers and don’t affect which operating systems can be used by clients (workstation and servers).
Older functionality and features are still supported in newer functional levels, so you shouldn’t notice any differences, and everything is expected to continue to work as before.
If (for some reason) you still have concerns about certain applications, contact the vendor for clarification.

Do It Right: Backup your AD environment (using Windows Server Backup or any other solution you've got), upgrade the FFL and the DFL in your test environment and then in production.

Mistake #5: Using DNS as an Archive by Disabling DNS Scavenging

DNS is one of the most important services in any environment. It should run smoothly and stay up to date so it can resolve names to IP addresses correctly and without issues.

Yet, there are cases where customers treat DNS as an archive for old and unused server names and IP addresses. In those cases, administrators disable the DNS Scavenging option to prevent old DNS records from being deleted. This is a bad habit because it can easily lead to a messy DNS with duplicate and irrelevant records, where A records point to IP addresses that no longer exist and PTR records refer to computers deleted a long time ago.

For those of you who don’t know, DNS Scavenging is a DNS feature responsible for cleaning up old and unused DNS records that are no longer relevant, based on their timestamp.
When a DNS record is updated or refreshed by a DNS client, its timestamp is updated with the current date and time.
DNS Scavenging is designed to delete records whose timestamp is older than the ‘No-Refresh’ + ‘Refresh’ intervals (which are configured in the DNS zone settings). Note that static DNS records are not scavenged at all.

If DNS Scavenging has been disabled in your environment for a while, I suggest running the PowerShell script below before enabling it, in order to better understand which records are going to be removed as part of the scavenging process.
The script checks every dynamic DNS record and decides whether it is:
• A stale record that responds to ping.
• A stale record that doesn’t respond to ping.
• An updated record (not stale).
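The original script is not reproduced here; the following is a minimal sketch of the same idea, assuming the DnsServer module on one of your DNS servers (the zone name is a placeholder):

# Minimal sketch - classify dynamic A records as stale/updated and test whether they still respond to ping.
Import-Module DnsServer

$zone  = 'contoso.com'
$aging = Get-DnsServerZoneAging -Name $zone
$staleBefore = (Get-Date) - ($aging.NoRefreshInterval + $aging.RefreshInterval)

Get-DnsServerResourceRecord -ZoneName $zone -RRType A |
    Where-Object { $_.Timestamp } |   # dynamic records only (static records have no timestamp)
    ForEach-Object {
        $stale  = $_.Timestamp -lt $staleBefore
        $pings  = Test-Connection -ComputerName "$($_.HostName).$zone" -Count 1 -Quiet
        $status = if (-not $stale) { 'Updated (not stale)' }
                  elseif ($pings)  { 'Stale - responds to ping' }
                  else             { 'Stale - no response to ping' }
        [pscustomobject]@{
            Name      = $_.HostName
            IPAddress = $_.RecordData.IPv4Address
            Timestamp = $_.Timestamp
            Status    = $status
        }
    } | Format-Table -AutoSize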

The script’s output should look like this:

Bottom Line: DNS Scavenging is NOT the place to save all your ancient names and IP addresses. If you need to keep this information, use a CMDB tool or another platform designed for it. DNS is an operational service that should respond quickly and reliably with only correct and relevant values.

Do It Right: Enable DNS scavenging and get rid of those old and unused records.

Mistake #6: Using a DHCP Failover Without Configuring DDNS Update Credentials

DHCP Failover is a well-known feature that was released back in September 2012 with Windows Server 2012. It provides a high-availability mechanism by placing two DHCP servers in a failover relationship.

When the option "Always dynamically update DNS records" in the DHCP properties is selected, the DHCP server updates the DNS with A and PTR records of DHCP clients using its own computer credentials (e.g. ‘DHCP01’ computer object).

When a DHCP Failover is configured, this can become an issue:
When the first DHCP server (e.g. DHCP01) in a DHCP Failover registers a DNS record, it becomes its owner and gets the relevant permissions to update the record when needed.
If the second DHCP server (e.g. DHCP02) in the failover relationship tries to update the same record (because DHCP01 is unavailable at that moment), the update fails because it doesn’t have the required permissions to update the record.
Note that if your DNS zones are configured with "Nonsecure and secure" dynamic updates (which goes against best practices), security permissions on DNS records are not enforced at all, and records can be updated by any client, including your DHCP servers.

To resolve this, you can configure DNS dynamic update credentials and enter the username and password of a dedicated user account created for this purpose (e.g. SrvcDHCP).
In general, no special permissions are required for this account.
The DHCP servers will then always use these credentials when registering and updating DNS records.
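A minimal sketch of setting this up with the DhcpServer module (the account and server names are placeholders); the same can be done in the DHCP console under IPv4 Properties > Advanced > Credentials:

# Minimal sketch - assumes the DhcpServer module; account and server names are placeholders.
$cred = Get-Credential -UserName "CONTOSO\SrvcDHCP" -Message "DDNS update account"

# Configure the same credentials on both DHCP failover partners
Set-DhcpServerDnsCredential -Credential $cred -ComputerName "DHCP01"
Set-DhcpServerDnsCredential -Credential $cred -ComputerName "DHCP02"

# Verify
Get-DhcpServerDnsCredential -ComputerName "DHCP01"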

Before changing the DNS dynamic update credentials, you may want to consider changing the ownership and the permissions of existing DNS records to include the new user account, especially if your DHCP environment has been running for a long time.

In order to complete this, you can use the PowerShell script below.
The script examines each DNS record and displays a table with records that meet all of the following conditions:

  1. The DNS record is a dynamic record.
  2. Record’s current owner is a DHCP server.
  3. Record’s type is A or PTR.

If approved by the user, the script updates the selected records with the new owner and adds the user account to each record’s ACL with ‘Full Control’ permission.
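The full script is not reproduced here; as a much simpler starting point, the sketch below only LISTS the DNS records whose current owner is one of the DHCP servers (it assumes the ActiveDirectory module and that the zone is stored in the DomainDnsZones partition; zone and server names are placeholders, and changing ownership/ACLs is left out):

# Minimal sketch - list dnsNode objects owned by the DHCP server computer accounts.
Import-Module ActiveDirectory

$zone     = 'contoso.com'
$domainDN = (Get-ADDomain).DistinguishedName
$zoneDN   = "DC=$zone,CN=MicrosoftDNS,DC=DomainDnsZones,$domainDN"

Get-ADObject -SearchBase $zoneDN -LDAPFilter '(objectClass=dnsNode)' |
    ForEach-Object {
        $owner = (Get-Acl -Path ("AD:\" + $_.DistinguishedName)).Owner
        if ($owner -match 'DHCP0[12]\$$') {   # owned by DHCP01$ or DHCP02$ (placeholders)
            [pscustomobject]@{ Record = $_.Name; Owner = $owner }
        }
    } | Format-Table -AutoSize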

Bottom Line: Using a DHCP Failover without configuring DNS dynamic update credentials will result in DNS update failures when one DHCP server tries to update records that were registered by the other DHCP server.

Do It Right: If you are using DHCP Failover, configure DNS dynamic update credentials on both DHCP servers.

 

In the next (and last) blog post we’ll talk about a few more issues and wrap up this series.

Understanding and using the Pending Restart Feature in SCCM Current Branch


On a daily basis, I get asked about new features in System Center Configuration Manager and how they can be used to simplify life for customers.

Microsoft's mission is to empower every person and every organization on the planet to achieve more, so how can ConfigMgr help with that?

 

With the release of ConfigMgr 1710, a new feature was added called “Pending Restart”.

This allows administrators to quickly identify, right from the console, which machines need a restart and why a restart is required.

 

This blog post will guide you through using WQL and SQL queries to create reports and collections that simplify management and reporting to the business, and show how to use this information to schedule mass restarts and keep your devices compliant.

So… Let’s get started!

 

ClientState

 

First, we need to understand that the "Pending Restart" tab in a collection uses the ClientState information from the v_CombinedDeviceResources view in the database.

 


 

The ClientState information is what lets us know if there is a reboot pending.

There are five main states:

0 = No reboot Pending
1 = Configuration Manager
2 = File Rename
4 = Windows Update
8 = Add or Remove Feature

A computer may have more than one of these states at the same time; the reported value is then the sum of the applicable state values, as shown in the combinations below and decoded in the sketch after the list.

1 – Configuration Manager
2 – File Rename
3 – Configuration Manager, File Rename
4 – Windows Update
5 – Configuration Manager, Windows Update
6 – File Rename, Windows Update
7 – Configuration Manager, File Rename, Windows Update
8 – Add or Remove Feature
9 – Configuration Manager, Add or Remove Feature
10 – File Rename, Add or Remove Feature
11 – Configuration Manager, File Rename, Add or Remove Feature
12 – Windows Update, Add or Remove Feature
13 – Configuration Manager, Windows Update, Add or Remove Feature
14 – File Rename, Windows Update, Add or Remove Feature
15 – Configuration Manager, File Rename, Windows Update, Add or Remove Feature
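As a quick helper, the value can be decoded back into its component reasons (a minimal PowerShell sketch):

# Minimal sketch - decode a ClientState value into its component reboot reasons.
function Convert-ClientState {
    param([int]$ClientState)
    if ($ClientState -eq 0) { return 'No reboot pending' }
    $flags = @{ 1 = 'Configuration Manager'; 2 = 'File Rename'; 4 = 'Windows Update'; 8 = 'Add or Remove Feature' }
    ($flags.Keys | Sort-Object | Where-Object { $ClientState -band $_ } | ForEach-Object { $flags[$_] }) -join ', '
}

Convert-ClientState 13   # Configuration Manager, Windows Update, Add or Remove Feature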

 

By Querying the SCCM DB, we can see what state a machine is in.
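A minimal sketch of such a query, run here with Invoke-Sqlcmd (the SQL Server instance and site database names are placeholders, and the SqlServer module is assumed):

Invoke-Sqlcmd -ServerInstance "CMSQL01" -Database "CM_P01" -Query @"
SELECT Name, ClientState
FROM   dbo.vSMS_CombinedDeviceResources
WHERE  ClientState > 0   -- only machines that DO require a reboot
ORDER  BY Name
"@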

 


Note we are only looking here for machines that DO require a reboot.

So far, we have identified that there are machines in our environment that require restarts, and we have looked at the different states a machine can report.

Restarting Machines

So how do I go about Restarting a machine?

There are two main ways:

1. Straight out of the console


The first and easiest way, suitable for a small number of machines, is to select one (or more) machines – Right-click – Client Notification – Restart.

This will cause a pop-up notification to appear on the users’ machines.

The user will have two options: Restart or Hide.

 

2. Create a collection that lists all the machines that require a restart.

This is the option to use when machines need to be targeted for a restart en masse.

Whether it is users who do not restart their machines, or a restart required for applying an out-of-band update, this is a quick way to group machines together and schedule a restart task sequence.

 


 

WQL Query for Collection

select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Name, SMS_R_SYSTEM.SMSUniqueIdentifier, SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System
join sms_combineddeviceresources on sms_combineddeviceresources.resourceid = sms_r_system.resourceid
where sms_combineddeviceresources.clientstate != 0

 

SQL Query for Report

 

We now have a Collection of all the machines that require restarts.

I still need to be able to report to the business WHERE those machines are, WHO is using them, and WHAT operating system they are using.

 

This is where you can create a report very easily using the query below.

This will list the machine name, AD site, operating system, client state, state meaning, last logged-on user, and last active time for each machine.

 

SELECT        Name AS [Pending restart Clients], ADSiteName, ClientState,
          (SELECT CASE [ClientState] 
WHEN '1' THEN 'Configuration Manager' WHEN '2' THEN 'File Rename' WHEN '3' THEN 'Configuration Manager, File Rename' WHEN '4' THEN 'Windows Update'
WHEN '5' THEN 'Configuration Manager, Windows Update' WHEN '6' THEN 'File Rename, Windows Update' WHEN '7' THEN 'Configuration Manager, File Rename, Windows Update' 
WHEN '8' THEN 'Add or Remove Feature' WHEN '9' THEN 'Configuration Manager, Add or Remove Feature' WHEN '10' THEN 'File Rename, Add or Remove Feature'
WHEN '11' THEN 'Configuration Manager, File Rename, Add or Remove Feature' WHEN '12' THEN 'Windows Update, Add or Remove Feature' 
WHEN '13' THEN 'Configuration Manager, Windows Update, Add or Remove Feature' WHEN '14' THEN 'File Rename, Windows Update, Add or Remove Feature'
WHEN '15' THEN 'Configuration Manager, File Rename, Windows Update, Add or Remove Feature' ELSE 'Unknown' END AS Expr1) AS [Client State Detail],
          (SELECT CASE WHEN DeviceOS LIKE '%Workstation 5.0%' THEN 'Microsoft Windows 2000' WHEN DeviceOS LIKE '%Workstation 5.1%' THEN 'Microsoft Windows XP' 
WHEN DeviceOS LIKE '%Workstation 5.2%' THEN 'Microsoft Windows XP 64bit' WHEN DeviceOS LIKE '%Server 5.2%' THEN 'Microsoft Windows Server 2003' 
WHEN DeviceOS LIKE '%Workstation 6.0%' THEN 'Microsoft Windows Vista' WHEN DeviceOS LIKE '%Server 6.0%' THEN 'Microsoft Windows Server 2008'
WHEN DeviceOS LIKE '%Server 6.1%' THEN 'Microsoft Windows Server 2008 R2' WHEN DeviceOS LIKE '%Workstation 6.1%' THEN 'Microsoft Windows 7' 
WHEN DeviceOS LIKE '%server 6.3%' THEN 'Microsoft Windows Server 2012 R2' WHEN DeviceOS LIKE '%server 6.2%' THEN 'Microsoft Windows Server 2012'
WHEN DeviceOS LIKE '%Workstation 6.2%' THEN 'Microsoft Windows 8' WHEN DeviceOS LIKE '%Workstation 6.3%' THEN 'Microsoft Windows 8.1' 
WHEN DeviceOS LIKE '%Workstation 10%' THEN 'Microsoft Windows 10' WHEN DeviceOS LIKE '%server 10%' THEN 'Microsoft Windows Server 2016'
ELSE 'N/A' END AS Expr1) AS [Operating System], LastLogonUser, LastActiveTime
FROM     dbo.vSMS_CombinedDeviceResources
WHERE    (ClientState > 0) AND (ClientActiveStatus = 1)

In Conclusion

The introduction of ClientState reporting into the console allows us, as administrators, to get a view of which machines need a reboot, and why.

I hope that these queries will help guide you and simplify your daily administration.

Configuration Manager Advanced Dashboards – Rich view of your Configuration Manager environment


 

Introduction

 

As a Premier Field Engineer (PFE) at Microsoft, I get asked by a lot of customers about custom dashboards and reports that are available, or can be created, for monitoring the SCCM environment – checking the status of client activity, client health, deployments or content – to provide to support teams, SCCM administrators and managers.

So yes, there are tons of native built-in reports to get that data, but putting it all together to get an overall view of the environment is the challenge…

Solution

 

The Configuration Manager Advanced Dashboards (CMAD) were created within Microsoft by a few PFEs, myself included, who form part of the development team led by Stephane Serero (@StephSlol).

The Configuration Manager Advanced Dashboards (CMAD) are designed to offer:

  • An at-a-glance view of the Configuration Manager environment
  • The ability to immediately pinpoint specific issues
  • Monitoring of ongoing activities

The CMAD solution (Configuration Manager Advanced Dashboards) delivers a data-driven reporting overview of the System Center Configuration Manager environment.

This solution consists of a rich set of dashboards designed to deliver real-time reporting of ongoing activity in your Configuration Manager environment.

This solution does not replace the native Configuration Manager reports; it amplifies the data they show by providing additional insights.

The dashboards in this solution were created based on field experience and on customers’ needs to provide an overall view of various Configuration Manager functionality. The embedded charts and graphics provide details across the entire infrastructure.

 

Dashboard – Software Updates


Dashboard – ConfigMgr Servers Health


Dashboard – Client Health Statistics


Dashboard – Security Audit


Key Features and Benefits

The CMAD solution consists of 180+ dashboards/reports covering the following Configuration Manager topics:

  • Asset Inventory
  • Software Update Management
  • Application Deployment
  • Compliance Settings
  • Infrastructure Monitoring:
  • Site Replica
  • Content replication
  • Software Distribution
  • Clients Health
  • Servers Health
  • SCEP Technical Highlights

The CMAD is supported on Configuration Manager 2012 and later releases (including Current Branch versions). The CMAD is supported on Reporting Services 2008 R2 and later releases.

 

Some might ask – but SSRS is so last year…

That's why the team has also created a Power BI version, which comes with the offering “System Center Configuration Manager and Intune PowerBI Dashboard Integration”.

So now you can harness all the capabilities of PowerBI to enhance the reporting experience.


 

Conclusion

 

The introduction of this solution has allowed SCCM administrators to get a better view of the state of their SCCM environments.

So you ask, how do we get these dashboards?

If you are a Microsoft Premier customer, you can reach out to your TAM for delivery questions!

Field Notes: The case of a crashing Hyper-V VM – Integration Services Out of Date


Background

I recently had an opportunity to offer assistance on a case relating to stop errors (blue screens) experienced in a Virtual Machine (VM) running on a Hyper-V Failover Cluster.  I was advised that two attempts to increase memory on the VM did not provide positive results (I’ll explain later why the amount of memory assigned to the VM was suspect).  The only thing I could initially get my hands on was a memory dump file, and I would like to take you through how one command in WinDbg can give you clues about what caused the issue and how it was resolved.

Quick Memory Dump Analysis

So I started to take a look at the Kernel Memory Dump that was generated during the most recent crash using the Debugging Tools for Windows (WinDbg).  WinDbg can be downloaded at https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools.  I’m not a regular debugger but I immediately made interesting discoveries when I opened the dump file.

The following are noticeable when the dump file is first opened:

  • User address space may not be available as this is a kernel dump
  • Symbols and other information that may be useful such as product build
  • Bugcheck analysis (in case of a crash) with some good guidance on next steps

Let us get the issue of assigned memory out of the way before we look at other data.  I used the !mem command from the MEX Debugging Extension for WinDbg (https://www.microsoft.com/en-us/download/details.aspx?id=53304) to dump memory information.  As can be seen in the image below, available memory is definitely low, which explains the reasoning behind increasing the assigned memory (a change that was later rolled back, as it did not help in this case).

[Screenshot: !mem output showing low available memory]

The !vm command provides similar output if you don’t use the MEX extension.

I ran !analyze -v to get detailed debugging information, as WinDbg suggests.

[Screenshot: !analyze -v bugcheck analysis output]

The output above shows that this was a Bug Check 0x7A: KERNEL_DATA_INPAGE_ERROR (https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0x7a--kernel-data-inpage-error).  More information can also be found in the WinDbg help file if you are unable to access the Internet.  Additional debug text states that the Windows Memory Manager detected corruption of a pagefile page while performing an in-page operation.  The data read from storage does not match the original data written.  This indicates the data was corrupted by the storage stack, or device hardware.  Just be careful since this is a VM and does not have direct access to hardware!

This explanation is in line with what I picked up in the stack:

[Screenshot: call stack from the crash]

How to determine the appropriate page file size for 64-bit versions of Windows provides a nice summary and guidance on paging files.

Let’s take a brief look at the !analyze window above (Bugcheck Analysis).  Here it can be seen that the BIOS date is 05/23/2012.  This is concerning, as the system BIOS should be kept up to date.  This also gave me a clue that we could be dealing with outdated Integration Services, which was the case.

Hyper-V Integration Services allow a virtual machine to communicate with the Hyper-V host.  Many of these services are conveniences, such as guest file copy, while others are important to the virtual machine's ability to function correctly.

 

What’s the cause of this unexpected behavior?

You’ve guessed it! Outdated Integration Services.   Here’s what happened:

  • The VM was configured with a startup RAM of 4 GB
  • Guest physical memory dropped when the VM did not need it (memory was reclaimed by Hyper-V)
  • An attempt by the VM to reclaim this RAM later when it was required failed as it (the VM) had difficulties communicating with the host through the Dynamic Memory Integration Service

 

Our Solution

Upgrading Integration Services resolved the issue.  After monitoring for some time, the VM was stable and there was no more memory pressure – it was able to reclaim memory as it needed it.  Here is an example of what it looked like in Process Explorer’s System Information View.

[Screenshot: Process Explorer System Information view]

This document also states that Integration Services must be upgraded to the latest version and that the guest operating system (VM) must support Dynamic Memory in order for this feature to function properly.

Summary

I demonstrated how one command in WinDbg (!analyze -v) can give you clues when dealing with system crashes.  In this case, it was outdated Integration Services (the BIOS date was the clue).  I would also like to highlight the importance of monitoring.  There is a lot of information on the Internet on ensuring smooth and reliable operation of Hyper-V hosts and VMs.

If WinDbg and a memory dump were all you had, this would be one of the ways to go.  Grab a free copy and have it ready on your workstation if you don’t already have it installed : )

Till next time…

Understanding Volume Activation Services – Part 1 (KMS and MAK)


Windows Activation and KMS have been around for many years - and still - a lot of people don't understand the basics of Windows activation, the differences between KMS and MAK, and how to choose the volume activation method that best meets the organization’s needs.

In this blog post, we'll shed some light on these subjects and explain how to deploy and use Volume Activation services correctly.
This will be the first part in the series.
 

Series:

  • Part 1 - KMS and MAK
  • Part 2 - Active-Directory Based Activation
  • Part 3 - Office Considerations and Other Activation Methods

 

So... What is KMS?

KMS, like MAK, is an activation method for Microsoft products, including Windows and Office.
KMS stands for Key Management Service. The KMS server, called 'KMS host', is installed on a server in your local network. The KMS clients connect to the KMS host for activation of both Windows and Office.

Prerequisites

A KMS host running on Windows Server 2019/2016/2012R2 can activate all Windows versions, including Windows Server 2019 and Windows 10 all the way down to Windows Server 2008R2 and Windows 7. Semi-Annual and Long-Term Service Channel (LTSC) are both supported by the KMS.

Pay attention that Windows Server 2016 and Windows Server 2012R2 require the following KBs to be installed in order to activate the newest Windows 10 Enterprise LTSC and Windows Server 2019:

For Windows 2016:
1. KB4132216 - Servicing stack update, May 18 (Information, Download).
2. KB4467684 - November Cumulative Update or above (Information, Download).

For Windows 2012R2:
1. KB3173424 - Servicing stack update (Information, Download).
2. KB4471320 - December 2018 Monthly Rollup (Information, Download).

In order to activate clients, the KMS uses a KMS host key. This key can be obtained from the Microsoft VLSC (Volume Licensing Service Center) website. By installing that key, you are configuring the server to act as a KMS host.
Because a KMS host key of a newer Windows version can be used to activate older Windows versions, you should only obtain and install the latest KMS host key available in VLSC.
Also, note that a KMS host key for Windows Server can be used to activate Windows clients as well - so you can (and should) use one KMS host key to rule them all.

Now that you understand those facts, you know you should look for 'Windows Server 2019' version in VLSC and obtain the KMS host key for that version. Once again, this key will let you activate any Windows server and Windows client version in your environment.

Deploying the KMS host

After getting the KMS host key from VLSC, you'll need to install it. For that, we'll use the Volume Activation Tools feature, available on Windows Server 2012R2 and above.
You can install the Volume Activation Tools feature using the Server Manager (Remote Server Administration Tools -> Role Administration Tools -> Volume Activation Tools) or by using the following PowerShell command: Install-WindowsFeature RSAT-VA-Tools.

Run the tool right from Server Manager -> Tools or by typing 'vmw' in your PowerShell window.
Volume Activation Tools lets you choose between Active Directory-Based Activation (covered in the second post) and Key Management Service (KMS). For now, we'll choose the KMS activation method.

After selecting the activation method, you'll be asked to provide the KMS host key obtained from the VLSC.

Choose your preferred activation method (by phone or online using the internet) to activate the KMS host key for the selected product.

In the 'Configuration' step, pay attention to the following settings:

  1. Volume license activation interval (Hours) - determines how often the KMS client attempts activation before it is activated. The default is 2 hours.
  2. Volume license renewal interval (Days) - determines how often the KMS client attempts reactivation with KMS (after it has been activated). The default is 7 days.
    By default, Windows is activated by the KMS host for 180 days. After 7 days, when 173 days remain until the volume activation expires, the client attempts reactivation against the KMS host and receives a new 180-day activation period.
  3. KMS TCP listening port - By default, the KMS host listens on port 1688 (TCP). You can change the port if needed using this setting.
  4. KMS firewall exceptions - Creates the relevant firewall exceptions for the Private/Domain/Public profiles.
  5. DNS Records - By selecting 'Publish', the Volume Activation Tools wizard creates the _vlmcs SRV record (e.g. _vlmcs._tcp.contoso.com). Windows uses this SRV record to automatically find the KMS server address (see the verification sketch below).
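To verify that the SRV record from step 5 was published correctly, you can query DNS directly (a minimal sketch; the domain name is a placeholder):

# Verify the _vlmcs SRV record that KMS clients use for auto-discovery
Resolve-DnsName -Name "_vlmcs._tcp.contoso.com" -Type SRV

# or, from any client:
nslookup -type=srv _vlmcs._tcp.contoso.com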

Reviewing KMS client settings

By now, you should be running a KMS host configured with a KMS host key for Windows Server 2019.

Any Windows client that is configured to use the 'KMS Client' channel will be activated against the new KMS host automatically within 2 hours (as this is the default 'KMS Activation Interval' value).
The 'KMS Client' channel is determined by the product key installed on the client. By default, computers that are running volume-licensed editions are KMS clients, with no additional configuration needed.
In case you need to convert a computer from a MAK or retail edition to a KMS client, you can override the currently installed product key and replace it with the applicable KMS client key for your Windows version. Pay attention that the selected key must exactly match the Windows version you're using, otherwise it won't work.
These KMS client keys, also known as Generic Volume License Keys (GVLKs), are public and can be found on the KMS Client Setup Keys page.

From the client perspective, you can use the slmgr.vbs script to manage and view the license configuration.
For a start, you can run 'slmgr.vbs /dli' to display the license information currently applied on the client.
You can see in the screenshot that a KMS client channel is being used.

If required, use 'slmgr.vbs /ipk PRODUCTKEY' (e.g slmgr.vbs /ipk WC2BQ-8NRM3-FDDYY-2BFGV-KHKQY) to replace the current product key with a new one (KMS client channel in this example).

In order to initiate an activation attempt, you can use 'slmgr.vbs /ato', which will immediately try to activate Windows.
The KMS host responds to the activation request with a count of how many computers have already contacted the KMS host for activation. Computers that receive a count below the activation threshold are not activated.
The activation threshold is different for Windows clients and servers:

  • Clients will activate if the count is 25 or higher
  • Servers will activate if the count is 5 or higher.

You can find the full list of slmgr.vbs command-line options right here.

When to use KMS

Compared to MAK, KMS should be your preferred activation method as long as you meet the activation threshold and the (very) basic requirements for deploying KMS (which are DNS and TCP/IP connectivity between the clients and the KMS host).
That said, we'll see in part 2 why Active Directory-Based Activation is actually even better than KMS for most scenarios.
 

What is MAK?

MAK (Multiple Activation Key) is another activation method for Microsoft products, including Windows and Office.
Unlike KMS, MAK activation is used for a one-time activation against Microsoft's hosted activation services.
This means that MAK does not require any server or service within your network - the activation request is approved by Microsoft's servers either online or by phone (for isolated environments that can't reach the internet).

Just like KMS host keys, MAK keys can be found in your VLSC portal. Each MAK has a predefined number of allowed activations, and each activation occurrence increases the number of used activations for that MAK.
In the screenshot above, you can see that 3 activations (out of 300 allowed activations/seats) were completed using a MAK for Windows Server 2016 Standard.

How to use MAK

Using MAK for activation is very simple.
First, you'll have to go to VLSC and obtain the suitable MAK for your product (e.g. Windows Server 2016 Standard).
Then, open Command Prompt (cmd) in elevated mode and run the following commands:

  1. Install your MAK key using 'slmgr.vbs /ipk MAKProductKey' (e.g slmgr.vbs /ipk ABCDE-123456-ABCDE-123456-ABCDE).
  2. Activate Windows using 'slmgr.vbs /ato'. The following message should appear:
  3. To view the activation details you can use the 'slmgr /dli' command.

When to use it

The MAK activation method should be used only for computers that never connect to the corporate network, and for environments where the number of physical computers does not meet the KMS activation threshold and Active Directory-Based Activation cannot be used for some reason.

 

Summary

In the first part of the series we learned about KMS and MAK, and the purpose of each activation method.
As a rule of thumb, you should always try to stick with KMS activation as long as possible.
When KMS is not an option (usually due to lack of connectivity to the corporate network), consider using a MAK.

Remember that one KMS host key can be used to activate all of your Windows versions, servers and clients included. Grab the latest key version from your VLSC and you're good to go.
If you encounter problems when trying to activate, check that your KMS server is available and running, and use the slmgr.vbs tool to get more details about your client's activation status.

In the next posts, we'll cover the Active Directory-based activation and understand how to activate Office using the Volume Activation services. Stay tuned!

Office 365 ProPlus – End to End Servicing in Configuration Manager



The following post was contributed by Cliff Jones a Consultant working for Microsoft.

Background


Recently I was asked by a few of my customers how to simplify the deployment of Office 365 ProPlus updates in their environment, to stay within support while taking advantage of the latest features available with each release.

Both Windows 10 and Office 365 have adopted the servicing model for client updates. This means that new features, non-security updates, and security updates are released regularly, so your users can have the latest functionality and improvements. The servicing model also includes time for enterprise organizations to test and validate releases before adopting them.

By default, Office 365 ProPlus is set to use Semi-Annual Channel, which is also what a lot of customers deploy.

In this blog post I will focus on the setup of the Automatic Deployment Rule that will be used for the servicing of Office 365 ProPlus configured to use the Semi-Annual Channel.

Solution


System Center Configuration Manager has the ability to manage Office 365 client updates by using the Software Update management workflow.  First, we need to confirm that all the requirements and prerequisites are in place to be able to deploy the O365 updates.

If you still need to create the O365 package in SCCM, you can have a read through this blog from Prajwal Desai, which covers all the required steps.


High Level steps to deploy Office 365 updates with Configuration Manager:


  1. Verify the requirements for using Configuration Manager to manage Office 365 client updates:
    • System Center Configuration Manager, update 1602 or later
    • An Office 365 client - Office 365 ProPlus, Visio Online Plan 2 (previously named Visio Pro for Office 365), Project Online Desktop Client, or Office 365 Business
    • Supported channel version for Office 365 client. For more details, see Release information for updates to Office 365 ProPlus
    • Windows Server Update Services (WSUS) 4.0

You can't use WSUS by itself to deploy these updates. You need to use WSUS in conjunction with Configuration Manager

  • The hierarchy's top level WSUS server and the top level Configuration Manager site server must have internet access.
  • On the computers that have the Office 365 client installed, the Office COM object is enabled.
  • Configure software update points to synchronize the Office 365 client updates. Set Updates for the classification and select Office 365 Client for the product. Synchronize software updates after you configure the software update points to use the Updates classification.
  • Enable Office 365 clients to receive updates from Configuration Manager. Use Configuration Manager client settings or group policy to enable the client.

    Method 1: Beginning in Configuration Manager version 1606, you can use the Configuration Manager client setting to manage the Office 365 client agent. After you configure this setting and deploy Office 365 updates, the Configuration Manager client agent communicates with the Office 365 client agent to download the updates from a distribution point and install them. Configuration Manager takes inventory of Office 365 ProPlus Client settings.

    1. In the Configuration Manager console, click Administration > Overview > Client Settings.

    2. Open the appropriate device settings to enable the client agent. For more information about default and custom client settings, see How to configure client settings in System Center Configuration Manager.

    3. Click Software Updates and select Yes for the Enable management of the Office 365 Client Agent setting.

    Method 2: Enable Office 365 clients to receive updates from Configuration Manager by using the Office Deployment Tool or Group Policy.

  • Create Automatic Deployment Rule to deploy the updates using the below steps:


  • Step 1 – Create Office 365 ProPlus Collections


    First we will create a few collections to assist with the management of Office 365 updates. These collections include each possible Office channel, the released versions of the Semi-Annual Channel, and the Semi-Annual servicing rings, which will be used for the deployments later in the post.



    Office 365 Channels

    Each Collection is defined by the CDNBaseURL which gets populated upon installation. This property is leveraged over other options as it provides the most consistent and accurate definition of the Office Channel.

    The following query rule should be used for each of the Channels. Be sure to update each with the proper CDNBaseURL value:

    select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS on SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.ResourceID = SMS_R_System.ResourceId where SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.CDNBaseUrl = "http://officecdn.microsoft.com/pr/7ffbc6bf-bc32-4f92-8982-f9dd17fd3114"

    • Monthly Channel
      (formerly Current Channel):
      CDNBaseUrl = http://officecdn.microsoft.com/pr/492350f6-3a01-4f97-b9c0-c7c6ddf67d60

    • Semi-Annual Channel
      (formerly Deferred Channel):
      CDNBaseUrl = http://officecdn.microsoft.com/pr/7ffbc6bf-bc32-4f92-8982-f9dd17fd3114

    • Monthly Channel (Targeted)
      (formerly First Release for Current Channel):
      CDNBaseUrl = http://officecdn.microsoft.com/pr/64256afe-f5d9-4f86-8936-8840a6a4f5be

    • Semi-Annual Channel (Targeted)
      (formerly First Release for Deferred Channel):
      CDNBaseUrl = http://officecdn.microsoft.com/pr/b8f9b850-328d-4355-9145-c59439a0c4cf
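    If you want to spot-check which channel (and build) a given machine reports outside of ConfigMgr, both values can be read from the Click-to-Run registry key – a minimal sketch, assuming an Office 365 ProPlus Click-to-Run installation:

    # Minimal sketch - read the channel URL and reported version from the Click-to-Run configuration key.
    $c2r = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration'
    [pscustomobject]@{
        CDNBaseUrl      = $c2r.CDNBaseUrl        # identifies the update channel
        VersionToReport = $c2r.VersionToReport   # build collected by ConfigMgr hardware inventory
    }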



    Office 365 Versions

    To maintain compliance and understand which clients are currently supported and unsupported, it is recommended to keep updated collections based on the released versions of the Semi-Annual Channel.

    When a version reaches the end of its supported time frame, the collection name is updated to reflect this. A new collection is then created representing the new Semi-Annual release.

    Each collection query is based on the property called VersionToReport, with the collection limited to the All Semi-Annual Channel Clients collection created in the previous section. The build numbers can be found here. The collection query is structured as follows:

    Office 365 ProPlus Semi-Annual v1708:

    select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS on SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.ResourceID = SMS_R_System.ResourceId where SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.VersionToReport like "16.0.8431%"


    Note: you can also take advantage of this great script to create the collections; it also includes some other very useful operational and maintenance collections for SCCM.


    Semi-Annual Channel Servicing Rings

    Depending on the customer, their deployment needs, and timing, the number of rings will differ. This example showcases 3 servicing rings, each allowing 1 month of deployment availability. This provides time for an administrator to delay a deployment if an issue is identified.

    The availability date is based on the date when the new version of the Semi-Annual Channel is released (every six months, in January and July) and when the ADR is scheduled to run.

    Example servicing breakdown:

    Phase    | Identified Reason                                 | Availability Date | Install After Available Date
    Phase 1  | Pilot - IT Organization                           | Immediately       | 1 Month
    Phase 2  | Identified Office addon\macro Application owners  | +1 Month          | 1 Month
    Phase 3  | Remaining machines in the environment             | +2 Months         | 1 Month






    Step 2 - Create Automatic Deployment Rule


    So the next step is to create the ADR that will be used to deploy the O365 updates.

    Unfortunately, there is no way to fully automate the creation of the required Deployments with an Automatic Deployment Rule (ADR) every time a new Semi-Annual Office Channel version is released. This just means that every 6 months an update to the ADR will be needed. This can be as simple as updating the search criteria of the rule to include the latest release version.

    This ADR will be scheduled to run every 6 months on the 3rd Wednesday of the month. This gives the IT administrator the necessary time to update the rule to reflect the most recently released Semi-Annual Channel build.



    Select the below criteria for the version to be released



    Set the schedule to run every 6 months on the 3rd Wednesday of the month



    For the pilot group, the update will be available immediately, with a deadline of 1 month.



    Select ‘Display In Software Center and show all notifications’



    Create the deployment package that will contain the O365 updates



    Step 3 - Create Additional Deployments

    Once the rule has been created, add additional deployments for each of the required phases:


    • Office 365 ProPlus Updates Phase2 - Identified Office addon\macro Application owners



    • Office 365 ProPlus Updates Phase3 - Remaining machines in the environment


    And this will be the end result: three deployments on the ADR, one for each phase.



    Conclusion


    With the increased update cadence, upgrading Office 365 ProPlus improperly is a key concern, as it could result in a customer accidentally deploying a feature update and running into unexpected issues – so PROPER testing is critical!

    So I hope that the above process will help to simplify the deployment of O365 updates as much as possible.

    Maybe upcoming SCCM releases will include new features to automate this even further.

    Till the next blog…

    Cheers!

    System Center Configuration Manager Client Health – Toolset to identify and remediate client issues


    Introduction

    I get asked by a lot of customers: "How can we reduce the amount of time spent manually troubleshooting agents, correctly identify what is wrong with the systems, and quickly and automatically remediate the issues on those systems?"

    Even though we do have a built-in check for the client that runs daily, as described here, that may not be enough for most Administrators.

    The Solution

    The System Center Configuration Manager Client Health (CMCH) solution was created within Microsoft by PFEs to address the need, identified by engineers in the field, for more expansive checks and remediation.

    The CMCH solution provides years of client health knowledge focused on proactive monitoring and automated remediation to ensure that clients are fully functional while reducing risks and increasing reliability. The framework is fully customizable and built with remediation in mind.

    The CMCH solution contains different components for System Center Configuration Manager such as:

    • Custom Collections
    • Configuration Baseline
    • Configuration Items (CIs)
    • Custom Client Health reports

    The CMCH toolset is supported on Configuration Manager Current Branch and Configuration Manager 2012/2012 R2 with the latest Service Pack.

    Key Features and Benefits of CMCH include:

    • A powerful remediation agent and approximately 26 Client Health focused Configuration Items.

    In all, there are 30+ Client Component issues and 37+ Operating System dependency issues that are addressed.

    Detailed trending analysis identifies systems that are recently confirmed to be on the network but remain unhealthy. The service has a proven track record for scalability and is leveraged in hierarchies with over 200,000 clients.



    Technical Highlights

    • Automatic Remediation through the PFE Remediation script and various remediation programs

    • Leverage collections to easily identify issues and target resolutions

    • Trending reports and dashboards

    • Low Network and Database Footprint

    • Right-Click Tools for the Solution

     

    Right-Click tools

     

    Baseline Filtering Collections

    Here we start filtering out noise to get the PFE baseline that we use for all additional checks.

     

    Client Components

     

    OS Dependency Rules

     

    Custom Configuration Items

     

    Customisation of the collections and CIs is performed by the PFE during the delivery, to match the customer's environment (e.g. Contoso Antivirus).

    One Main Dashboard

    Conclusion

     

    The Introduction of this solution has allowed SCCM Administrators to more effectively identify, remediate and report on client health issues.

    How can I get this into my SCCM environment?

    If you are a Microsoft Premier customer, please reach out to your TAM for delivery questions!

     


    Using ‘Scripts’ Feature in Configuration Manager for Ad Hoc WSUS Maintenance


    Background

     

    We all know how important WSUS maintenance is, and there are numerous posts on how to automate it and which scripts/queries to run.

    Have a read through this amazing blog from Meghan Stewart (Support Escalation Engineer) if you are looking at automating WSUS maintenance. It has all the required information on when, why and how to implement your WSUS maintenance, as well as a great PowerShell script to help.

    But… have you ever just needed to kick off WSUS maintenance or a SQL defrag remotely, on multiple servers at the same time? I had this requirement a while back: the customer had 150 secondary sites with Software Update Points installed and wanted to do WSUS maintenance, but only when they had a change window available, and they also wanted a log file created in a central share – so NOT fully automated!

    So I decided to put the 'Scripts' feature in SCCM to the test and try to implement a solution to assist the customer.

     

    Solution

     

    Important Considerations

    Before we get started, it’s important that I mention a few things:

    1. Remember that when doing WSUS maintenance when you have downstream servers, you add to the WSUS servers from the top down, but remove from the bottom up. So if you are syncing/adding updates, they flow into the top (upstream WSUS server) then replicate down to the downstream servers. When you do a cleanup, you are removing things from the WSUS servers, so you should remove from the bottom of the hierarchy and allow the changes to flow up to the top.
    2. It’s important to note that this WSUS maintenance can be performed simultaneously on multiple servers in the same tier. You do however want to make sure that one tier is done before moving onto the next one when doing a cleanup. The cleanup and re-index steps I talk about below should be run on all WSUS servers regardless of whether they are a replica WSUS server or not.
    3. This is a big one. You must ensure that you do not sync your SUPs during this maintenance process as it is possible you will lose some of the work you have already done if you do. You may want to check your SUP sync schedule and set it to manual during this process.

    Step 1 – Create Software Update Point Collections

     

    The first step will be to create the collections that we will run the script against.

     

    Below are the two collections for the primary (upstream WSUS) and secondary (downstream WSUS) servers.


     

    Step 2 – Create Scripts in SCCM

     

    The next step is to create the scripts in SCCM that will be run against the collections created above.

    1. Create a share on a server and copy the below .ps1 and .sql files into the share
    2. Create a log file folder beneath that where the output logs will be written to

     

    • WSUS database(SUSDB) Re-index script

      PowerShell Script (SUSDB_Reindex.ps1):

      $Logfile = $env:computername + "_reindex"
      Invoke-sqlcmd -ServerInstance "localhost" -Database "SUSDB" -InputFile "\\ServerName\Share\Scripts\WSUS_Cleanup\SUSDB_reindex.sql" -Verbose *> "C:\Windows\Temp\$Logfile.log"
      cd e:
      copy-item C:\Windows\Temp\$Logfile.log -destination \\ServerName\Share\Scripts\WSUS_Cleanup\Logs\
      exit $LASTEXITCODE
      • script runs “SUSDB_reindex.sql” file against each server in the collection
      • Outputs a logfile to the specified share

      Note: Change Servername, share and e: to the drive letter where scripts are located

       

      • WSUS database(SUSDB) Cleanup

       

      PowerShell Script (SUSDB_Cleanup.ps1):

      $Logfile = $env:computername + "_cleanup"
      Invoke-sqlcmd -ServerInstance "localhost" -Database "SUSDB" -ConnectionTimeout "0" -QueryTimeout "65535" -InputFile "\\Servername\Share\Scripts\WSUS_Cleanup\SUSDB_Cleanup.sql" -Verbose *> "C:\Windows\Temp\$Logfile.log"
      cd e:
      copy-item C:\Windows\Temp\$Logfile.log -destination \\Servername\Share\Scripts\WSUS_Cleanup\Logs
      exit $LASTEXITCODE
      • script runs “SUSDB_Cleanup.sql” file against each server in the collection
      • Outputs a logfile to the specified share

      Note: Change Servername, share and e: to the drive letter where scripts are located

       

      SQL script (SUSDB_Cleanup.sql):

       

      use susdb
      DECLARE @msg nvarchar(100)
      DECLARE @NumberRecords int, @RowCount int, @var1 int
      -- Create a temporary table with an Identity column
      CREATE TABLE #results (RowID INT IDENTITY(1, 1), Col1 INT)
      -- Call the Stored Procedure to get the updates to delete & insert them into the table
      INSERT INTO #results(Col1) 
      EXEC spGetObsoleteUpdatesToCleanup 
      
      
      -- Get the number of records in the temporary table
      SET @NumberRecords = @@ROWCOUNT
      SET @RowCount = 1
      -- Show records in the temporary table
      select * from #results
      -- Loop through all records in the temporary table
      -- using the WHILE loop construct & call the Stored Procedure to delete them
      WHILE @RowCount <= @NumberRecords
      BEGIN
      SELECT @var1 = Col1 FROM #results where RowID = @rowcount
      SET @msg = 'Deleting UpdateID ' + CONVERT(varchar(10), @var1) + ', Rowcount '+ CONVERT(varchar(10), @rowcount)
                     RAISERROR(@msg,0,1) WITH NOWAIT 
       EXEC spDeleteUpdate @localUpdateID=@var1 
       SET @RowCount = @RowCount + 1
      END
       -- Drop the temporary table when completed
      DROP TABLE #results
      

       

      • WSUS Cleanup

       

      PowerShell Script (WSUS_Cleanup.ps1):

      $WSUSServer = $env:COMPUTERNAME
      Get-WsusServer -Name localhost -PortNumber 8530 | Invoke-WsusServerCleanup -CleanupObsoleteComputers -CleanupObsoleteUpdates -CleanupUnneededContentFiles -CompressUpdates -DeclineExpiredUpdates -Verbose *> "\\Servername\Share\Scripts\WSUS_Cleanup\Logs\$($WSUSServer)_WSUSCleanup.log"
      • script runs WsusServerCleanup against each server in the collection
      • Outputs a logfile to the specified share

      Note: Change ServerName and the share path to match your environment

       

      Scripts

       

      All that's left is to import them into SCCM.

      1. In the Configuration Manager console, click Software Library.
      2. In the Software Library workspace, click Scripts.
      3. On the Home tab, in the Create group, click Create Script.
    4. On the Script page of the Create Script wizard, configure the following settings:
      CreateScript
        • Script Name - Enter a name for the script. Although you can create multiple scripts with the same name, using duplicate names makes it harder for you to find the script you need in the Configuration Manager console.
        • Script language - Currently, only PowerShell scripts are supported.
        • Import - Import a PowerShell script into the console. The script is displayed in the Script field.
        • Clear - Removes the current script from the Script field.
        • Script - Displays the currently imported script. You can edit the script in this field as necessary.
      5. Complete the wizard. The new script is displayed in the Script list with a status of Waiting for approval. Before you can run this script on client devices, you must approve it.

       

      Scripts must be approved, by the script approver role, before they can be run. To approve a script:

      1. In the Configuration Manager console, click Software Library.
      2. In the Software Library workspace, click Scripts.
      3. In the Script list, choose the script you want to approve or deny and then, on the Home tab, in the Script group, click Approve/Deny.
      4. In the Approve or deny script dialog box, select Approve, or Deny for the script. Optionally, enter a comment about your decision. If you deny a script, it cannot be run on client devices.
        Script - Approval
      5. Complete the wizard. In the Script list, you see the Approval State column change depending on the action you took.

       

      SCCMScripts

       

      Step 3 – Running The Scripts in SCCM

       

      The final step now once the scripts have been added to SCCM is just to run the scripts and wait….

       

        1. In the Configuration Manager console, click Assets and Compliance.
        2. In the Assets and Compliance workspace, click Device Collections.
        3. In the Device Collections list, click the collection of devices on which you want to run the script.
        4. Select a collection of your choice, click Run Script.
        5. On the Script page of the Run Script wizard, choose a script from the list. Only approved scripts are shown.
        6. Click Next, and then complete the wizard.
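
      The same approval and execution can also be scripted. Here is a hedged sketch, assuming the ConfigurationManager module; the GUID below is a placeholder for the script GUID shown in the console, and cmdlet parameters may vary slightly between ConfigMgr versions:

      Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
      Set-Location "$((Get-PSDrive -PSProvider CMSite).Name):"

      $scriptGuid = '00000000-0000-0000-0000-000000000000'   # placeholder - use your script's GUID
      Approve-CMScript -ScriptGuid $scriptGuid -Comment 'Approved for WSUS maintenance'
      Invoke-CMScript -ScriptGuid $scriptGuid -CollectionName 'WSUS - Downstream Servers'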

      Important

      If a script does not run, for example because a target device is turned off during the one hour time period, you must run it again.

         

        Script Monitoring and Output LogFiles:

         

        1. In the Configuration Manager console, click Monitoring.
        2. In the Monitoring workspace, click Script Status.
        3. In the Script Status list, you view the results for each script you ran on client devices. A script exit code of 0 generally indicates that the script ran successfully.
          • Beginning in Configuration Manager 1802, script output is truncated to 4 KB to allow for better display experience. Script monitor - Truncated Script

         

        Below is the Output from the scripts run above:

        reindex_log

         

        cleanup_log

         

        WSUScleanup_log

         

        Conclusion

         

        In this post, I demonstrated how we can use the 'Scripts' feature in SCCM to initiate WSUS cleanup scripts on demand.  Hopefully this is helpful to you, and it also shows the capability of the feature for almost anything. :)  Till next time…

         

        Disclaimer – All scripts and reports are provided ‘AS IS’
        These sample scripts are not supported under any Microsoft standard support program or service. These sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of these sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of these scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use these sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

        Understanding Volume Activation Services – Part 2 (Active Directory-Based Activation)


        In the previous part of the series, we talked about KMS, MAK, and how to choose between the two when looking for the right activation method in your environment.
        Today, we are going to talk about Active Directory-based activation or ADBA in short.

        Series:

        • Part 1 - KMS and MAK
        • Part 2 - Active Directory-Based Activation
        • Part 3 - Office Considerations & License Activation Troubleshooting

         

        What is exactly ADBA and why do you need it?

        Like KMS, Active Directory-based activation (ADBA) is used to activate Windows and Office in your corporate network.
        ADBA is a more reliable and redundant solution, and it has significant advantages compared to KMS, which make it the best option for activating client machines.
        As you can guess by its name, ADBA relies on Active Directory Domain Services to store activation objects and transparently activate domain-joined computers.

        Prerequisites

        There are few prerequisites for using Active Directory-based activation:

        • Schema version must be updated to at least Windows Server 2012 (a quick way to check is shown after this list).
          • There's NO need to upgrade the forest or domain functional levels.
          • Older Domain Controllers (like DCs running Windows Server 2008 R2) will be able to activate clients using ADBA as long as the schema is updated.
        • Computers that activate against ADBA must be:
          • Domain-joined to one of the forest domains (ADBA is a forest-wide feature).
          • Running Windows Server 2012/Windows 8.1 and above. Older operating systems (including Windows Server 2008 R2 and Windows 7) are NOT supported.
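
        A quick way to confirm the schema level is to read the objectVersion attribute of the Schema partition. This is a minimal sketch, assuming the RSAT Active Directory PowerShell module is available (objectVersion 56 corresponds to Windows Server 2012, 69 to 2012 R2, 87 to 2016, and 88 to 2019):

        Import-Module ActiveDirectory
        # Read the schema version from the Schema naming context
        $schemaNC = (Get-ADRootDSE).schemaNamingContext
        (Get-ADObject -Identity $schemaNC -Properties objectVersion).objectVersion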

        ADBA Vs. KMS

        There are some major advantages to using ADBA over KMS:

        • No thresholds - Unlike KMS, ADBA does not require any minimum threshold before it starts activating clients.
          Any client activation request is fulfilled immediately by ADBA as long as there is a suitable activation object in Active Directory.
        • Eliminates the need for an SRV record and a dedicated port - As we learned in the previous post, the KMS server listens on port 1688 for clients' activation requests.
          Clients find the KMS server based on the _VLMCS SRV records located in DNS.
          When using ADBA, clients look for activation objects in Active Directory by using LDAP, and the communication uses the default domain services ports. No dedicated ports and no SRV records are needed.
        • High availability - Active Directory-based activation is, by design, a highly available activation method. Any Domain Controller in the forest can be used to activate a client. You no longer need a dedicated server as a KMS host.

        While ADBA has significant advantages, it also has a few drawbacks:

        • No support for older Windows versions - ADBA can only activate Windows Server 2012/Windows 8.1 and above. Therefore, as long as your environment still includes older Windows versions like Windows Server 2008 R2 and Windows 7, you'll have to keep maintaining other activation methods like KMS and MAK.
        • Domain-joined only - ADBA can activate domain-joined computers only. In other words, any workgroup machine or machine that belongs to a different AD forest cannot be activated using ADBA.

        The good news is that ADBA and KMS can live together. You can use ADBA to activate new versions of Windows and Office and maintain KMS host servers for activating older Windows and Office versions like Windows Server 2008 R2, Windows 7, and Office 2010.

        This might be a good opportunity to remind you that Windows Server 2008 R2 and Windows 7 go out of support on January 14, 2020.

        Deploying Active Directory-based activation

        In order to deploy Active Directory-based activation, we are going to use the same Volume Activation Tools feature we used to deploy the KMS host.
        It is recommended to run the Volume Activation Tools from a management/administrative machine running Windows Server 2019. If you are running the Volume Activation Tools from Windows Server 2016 or Windows Server 2012 R2, please install the following KBs before you continue (the KBs are required for activating the newest Windows 10 Enterprise LTSC and Windows Server 2019):

        For Windows 2016:

        1. KB4132216 - Servicing stack update, May 18 (Information, Download).
        2. KB4467684 - November Cumulative Update or above (Information, Download).

        For Windows 2012R2:

        1. KB3173424 - Servicing stack update (Information, Download).
        2. KB4471320 - December 2018 Monthly Rollup (Information, Download).

        ADBA uses the KMS host key for activating clients. Yes, it's still called that name, as the KMS host key is used for both Active Directory-based activation and KMS activation method.
        The KMS host key can be obtained from Microsoft VLSC.
        Remember that you should only obtain and install the latest KMS host key for Windows Server available in VLSC. This is because:

        • A KMS host key of newer Windows version can be used to activate older Windows versions.
        • A Windows server KMS host key can be used to activate Windows clients.

        When the Volume Activation Tools opens, skip the introduction phase and choose 'Active Directory-Based Activation' as your volume activation method.
        Note that you must be a member of the local 'Administrators' group on the computer running the Volume Activation Tools. You also need to be a member of the 'Enterprise Admins' group, because the activation objects are created in the 'Configuration' partition of Active Directory.

        In the next step, you'll be asked to provide the KMS host key you obtained from the VLSC. Once again, this is the exact same key you used to activate the KMS host.
        It is recommended to enter a display name for your new activation object. The display name should reflect the product and its version (e.g. 'WindowsServer2019Std').

        Managing Active Directory-based activation

        To be honest, there's not much to administer and manage in ADBA.
        From time to time, you'll be required to install a new activation object for a new version of Windows or Office, but that's all.
        However, if you would like to view and delete currently installed activation objects, you can use either the Volume Activation Tools or the ADSI Edit (adsiedit.msc).

        Using the Volume Activation Tools, select 'Active Directory-Based Activation', click 'Next' and choose 'Skip to Configuration'.

        In the next screen, you can see the installed activation objects, including their display name and partial product key.
        If you would like to delete an activation object, just select the 'Delete' checkbox next to it and click 'Commit'.

        If you would like to see the activation objects in Active Directory, use adsiedit.msc to open the 'Configuration' partition, and navigate to Services\Microsoft SPP\Activation Objects.
        You can see that the object class is 'msSPP-ActivationObject', and you can identify the object easily by using the displayName value in the 'Attribute Editor'.
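
        If you prefer PowerShell over ADSI Edit, a minimal sketch like the following (again assuming the RSAT Active Directory module) lists the activation objects and their display names:

        Import-Module ActiveDirectory
        # Activation objects live under CN=Activation Objects,CN=Microsoft SPP,CN=Services in the Configuration partition
        $configNC = (Get-ADRootDSE).configurationNamingContext
        Get-ADObject -SearchBase "CN=Activation Objects,CN=Microsoft SPP,CN=Services,$configNC" -Filter "objectClass -eq 'msSPP-ActivationObject'" -Properties displayName | Select-Object displayName, Name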

        Reviewing ADBA client's settings

        After you enable ADBA and create the activation object in your Active Directory, supported client computers which are configured to use the 'KMS Client' channel will be activated automatically against ADBA.
        The activation is granted for a 180-day period, and client machines will try to reactivate every 7 days (just like in KMS).
        If, for some reason, the ADBA activation fails (e.g. the activation object can't be found or does not support the client OS), the client will try to use KMS activation as an alternative.

        You can still use the slmgr.vbs script to manage and view activation settings.
        Run 'slmgr.vbs /dli' to display the activation status. Pay attention to the "AD Activation client information", which indicates that the client was activated using ADBA.

        Other slmgr.vbs commands like 'slmgr /ipk' and 'slmgr /ato' can still be used to manipulate and configure the activation settings in the client machine.
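
        For quick reference, here is a hedged example of those checks from an elevated PowerShell prompt on a client (the GVLK placeholder is hypothetical – use the client setup key for your SKU only if you need to switch the machine from a MAK back to the KMS client channel):

        # Display the activation status - look for the "AD Activation client information" line
        cscript //nologo C:\Windows\System32\slmgr.vbs /dli
        # Optionally install a KMS client setup key (GVLK) first
        # cscript //nologo C:\Windows\System32\slmgr.vbs /ipk <GVLK-for-your-SKU>
        # Trigger activation immediately instead of waiting for the next scheduled attempt
        cscript //nologo C:\Windows\System32\slmgr.vbs /ato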

        Summary

        Active Directory-based activation should be your top priority when considering volume activation models.
        As it goes hand in hand with Active Directory, it provides you with high availability and eliminates the need for a dedicated activation server.
        ADBA is also great for small environments, where the number of computers does not meet the KMS activation threshold.
        Remember that you can run ADBA alongside KMS if you still have earlier operating systems or workgroup computers in your network.

         

        In the last post of the series, we'll talk about Office activation and how to troubleshoot activation issues in your environment.

         

        Field Notes: The case of the failed SQL Server Failover Cluster Instance – Binaries Disks Added to Cluster


        I paid a customer a visit a while ago and was requested to assist with a SQL Server Failover Cluster issue they were experiencing.  They had internally transferred the case from the SQL team to folks who look after the Windows Server platform as they could not pick up anything relating to SQL during initial troubleshooting efforts.

        My aim in this post is to:

        • explain what the issue was (adding disks meant to be local storage to the cluster)
        • provide a little bit of context on cluster disks and asymmetric storage configuration
        • discuss how the issue was resolved by removing the disks from cluster

        Issue definition and scope

        An attempt to move the SQL Server role/group from one node to another in a 2-node Failover Cluster failed.  This is what they observed:

        Failed SQL Server Group

        From the image above, it can be seen that all disk resources are online.  Would you suspect that storage is involved at this stage?  In cluster events, there was the standard Event ID 1069 confirming that the cluster resource 'SQL Server' of type 'SQL Server' in clustered role 'SQL Server (MSSQLSERVER)' failed.  Additionally, this is what was in the cluster log – “failed to start service with error 2”:

        Cluster Log

        Error code 2 means that the system cannot find the file specified:

        Net HelpMsg

        A little bit of digging around reveals that this is the image path we are failing to get to:

        Registry value

        Now that we have all this information, let’s look at how you would resolve this specific issue we were facing.  Before that however, I would like to provide a bit of context relating to cluster disks, especially on Asymmetric Storage Configuration.

        Context

        Consider a 2 node SQL Server Failover Cluster Instance running on a Windows Server 2012 R2 Failover Cluster with the following disk configuration:

        • C drive for the Operating System – each of the nodes has a direct attached disk
        • D drive for SQL binaries – each of the nodes has a dedicated “local” drive, presented from a Storage Area Network (SAN)
        • All the other drives required for SQL are shared drives presented from the SAN

        Disks in Server Manager

        Note: The 20 GB drive is presented from the SAN and is not added to the cluster at this stage.

        I used Hyper-V Virtual Machines to reproduce this issue in a lab environment.  For the SAN part, I used the iSCSI target that is built-in to Windows Server.
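
        For reference, here is a minimal sketch of how the lab 'SAN' disk could be carved out with the built-in iSCSI Target Server (the paths, target names, and initiator IQN are hypothetical examples for this lab only):

        # On the iSCSI target server: create a 20 GB virtual disk and expose it to one SQL node only
        New-IscsiVirtualDisk -Path 'C:\iSCSIVirtualDisks\PTA-SQL11-Binaries.vhdx' -SizeBytes 20GB
        New-IscsiServerTarget -TargetName 'PTA-SQL11-Binaries' -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:pta-sql11.contoso.com'
        Add-IscsiVirtualDiskTargetMapping -TargetName 'PTA-SQL11-Binaries' -Path 'C:\iSCSIVirtualDisks\PTA-SQL11-Binaries.vhdx'

        # On the SQL node: connect to the target, then bring the disk online in Disk Management
        New-IscsiTargetPortal -TargetPortalAddress 'pta-san01.contoso.com'
        Connect-IscsiTarget -NodeAddress (Get-IscsiTarget).NodeAddress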

         

        Asymmetric Storage Configuration

        A feature enhancement in Failover Clustering for Windows Server 2012 and Windows Server 2012 R2 is support for an Asymmetric Storage Configuration.  In Windows Server 2012, a disk is considered clusterable if it is presented to one or more nodes, is not the boot/system disk, and does not contain a page file.  https://support.microsoft.com/en-us/help/2813005/local-sas-disks-getting-added-in-windows-server-2012-failover-cluster

         

        What happens when you Add Disks to Cluster?

        Let us first take a look at the disks node in Failover Cluster Manager (FCM) before adding the disks.

        Disks in Failover Cluster Manager

        Here’s what we have (ordered by the disk number column):

        • The Failover Cluster Witness disk (1 GB)
        • SQL Data (50 GB)
        • SQL Logs (10 GB)
        • Other Stuff (5 GB)

        The following window is presented when an attempt to add disks to a cluster operation is performed in FCM:

        Add Disks to a Cluster

        Both disks are added as cluster disks when one clicks OK at this stage.  After adding the disks (which are not presented to both nodes), we see the following:

        Disks in Failover Cluster Manager

        Nothing changed regarding the 4 disks we have already seen in FCM, and the two “local” disks are now included:

        • Cluster Disk 1 is online on node PTA-SQL11
        • Cluster Disk 2 is offline on node PTA-SQL11 as it is not physically connected to the node

        At this stage, everything still works fine as the SQL binaries volume is still available on this node.  Note that the "Available Storage” group is running on PTA-SQL11.

         

        What happens when you move the Available Storage group?

        Move Available Storage

        Let’s take a look at FCM again:

        Disks in Failover Cluster Manager

        Now we see that:

        • Cluster Disk 1 is now offline
        • Cluster Disk 2 is now online
        • The owner of the “Available Storage” group is now PTA-SQL12

        This means that PTA-SQL12 can see the SQL binaries volume and PTA-SQL11 cannot, which causes downtime.  Moving the SQL group to PTA-SQL12 works just fine as the SQL binaries drive is online on that node.  You may also want to ensure that the resources are configured to automatically recover from failures.  Below is an example of default configuration on a resource:

        Resource Properties

         

        Process People and Technology

        It may appear that the technology is at fault here, but the Failover Cluster service does its bit to protect us from shooting ourselves in the foot, and here are some examples:

        Validation

        The Failover Cluster validation report does a good job in letting you know that disks are only visible from one node.  By the way, there’s also good information here on what’s considered for a disk to be clustered.

        Validation Report

        A warning is more like a “proceed with caution” when looking at a validation report.  Failures/errors mean that the solution does not meet requirements for Microsoft support.  Also be careful when validating storage as services may be taken offline.

         

        Logic

        In the following snippet from the cluster log, we see an example of the Failover Cluster Resource Control Manager (RCM) preventing the move of the “Available Storage” group in order to avoid downtime.

        Cluster Log

        Back online and way forward

        To get the service up and running again, we had to remove both Disk 1 and Disk 2 as cluster disks and make them “local” drives again.  The cause was that an administrator had added disks that were not meant to be part of the cluster as clustered disks.

        The disks then need to be brought online from a tool such as the Disk Management console, as they are automatically placed in an offline state to avoid possible issues that could be caused by having a non-clustered disk online on two or more nodes in a shared disk scenario.
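
        A hedged sketch of that remediation in PowerShell (the resource names and disk number are examples – confirm them in Failover Cluster Manager and Disk Management before running anything):

        # Remove the binaries disks from the cluster (run on one node)
        Remove-ClusterResource -Name 'Cluster Disk 1' -Force
        Remove-ClusterResource -Name 'Cluster Disk 2' -Force

        # On each node, bring its local binaries disk back online and clear the read-only flag
        Set-Disk -Number 2 -IsOffline $false
        Set-Disk -Number 2 -IsReadOnly $false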

        I got curious after this and reached out to folks who specialize in SQL Server to get their views on whether the SQL binaries drive should or should not be shared.  One of the strong views is to keep them as non-shared (non-clustered) drives, especially for SQL patching scenarios.  What happens if SQL patching fails in a shared drive scenario, for example?

        Anyway, it would be great to hear from you through comments.

        Till next time…

        Step by step MIM PAM setup and evaluation Guide – Part 3


        This is the third part of the series. In the previous posts, we prepared the test environment for the PAM deployment, created and configured all the needed service accounts, installed SQL Server, and prepared the PAM server for further installation. Now we have two forests – prod.contoso.com and priv.contoso.com. In PROD we have set up Certificate Services, an Exchange server, and ADFS, and configured two test applications – one using Windows Integrated Authentication and the other claims-based authentication. In the PRIV forest we have the PAM server prepared for the MIM/PAM deployment, with SQL Server ready.

        Series:

        Installing PAM Server

        1. Install SharePoint 2016
          1. Download SharePoint 2016 Prerequisites
          2. Download the following binaries into one selected folder (for example C:\Setup\Software\SP2016-Prerequisites) on the PRIV-PAM server:

            Cumulative Update 7 (KB3092423) for Microsoft AppFabric 1.1 for Windows Server [https://www.microsoft.com/en-us/download/details.aspx?id=49171]

            Microsoft Identity Extensions [http://go.microsoft.com/fwlink/?LinkID=252368]

            Microsoft ODBC Driver 11 for SQL Server [http://www.microsoft.com/en-us/download/details.aspx?id=36434]

            Microsoft Information Protection and Control Client [http://go.microsoft.com/fwlink/?LinkID=528177]

            Microsoft SQL Server 2012 Native Client [http://go.microsoft.com/fwlink/?LinkID=239648&clcid=0x409]

            Microsoft Sync Framework Runtime v1.0 SP1 (x64) [http://www.microsoft.com/en-us/download/details.aspx?id=17616] – Open SyncSetup_en.x64.zip and extract to this folder only Synchronization.msi

            Visual C++ Redistributable Package for Visual Studio 2013 [http://www.microsoft.com/en-us/download/details.aspx?id=40784]

            Visual C++ Redistributable for Visual Studio 2015 [https://www.microsoft.com/en-us/download/details.aspx?id=48145]

            Microsoft WCF Data Services 5.0 [http://www.microsoft.com/en-us/download/details.aspx?id=29306]

            Windows Server AppFabric 1.1 [http://www.microsoft.com/en-us/download/details.aspx?id=27115]

            At the end, you should have the following binaries in the selected folder:

        • AppFabric-KB3092423-x64-ENU.exe
        • MicrosoftIdentityExtensions-64.msi
        • msodbcsql.msi
        • setup_msipc_x64.msi
        • sqlncli.msi
        • Synchronization.msi
        • vcredist_x64.exe
        • vc_redist.x64.exe
        • WcfDataServices.exe
        • WindowsServerAppFabricSetup_x64.exe
      1. Install SharePoint Prerequisites
      2. Log on to PRIV-PAM as a priv\PAMAdmin (use password P@$$w0rd)

        Open PowerShell ISE as an Admin and paste the following script:

        $spPrereqBinaries = 'C:\Setup\Software\SP2016-Prerequisites'
        $sharePointBinaries = 'C:\Setup\Software\SharePoint2016'

        function Run-SystemCommand {
            Param(
                [parameter(Mandatory=$true)]
                [string]$Command,
                [parameter(Mandatory=$false)]
                [string]$Arguments = [String]::Empty,
                [parameter(Mandatory=$false)]
                [bool]$RestartIfNecessary = $false,
                [parameter(Mandatory=$false)]
                [int]$RestartResult
            )
            Process {
                try{
                    $myProcess = [Diagnostics.Process]::Start($Command, $Arguments)
                    $myProcess.WaitForExit()
                    [int]$exitCode = $myProcess.ExitCode
                    $result = ($exitCode -eq 0)
                    if($result) { Write-Host "[OK] $Command was successful" }
                    elseif ($RestartIfNecessary -and ($exitCode -eq $RestartResult)){
                        Write-Host "[Warning]Please rerun script after restart of the server"
                        Restart-Computer -Confirm
                    }
                    else { Write-Host "[Error] Failed to run $Command" }
                }
                catch {
                    Write-Host "[Error] Failed to run $Command"
                    Write-Host ("`t`t`t{0}" -f $_.Exception.Message)
                }
            }
        }

        $arguments = "/sqlncli:`"$spPrereqBinaries\sqlncli.msi`" "
        $arguments += "/idfx11:`"$spPrereqBinaries\MicrosoftIdentityExtensions-64.msi`" "
        $arguments += "/sync:`"$spPrereqBinaries\Synchronization.msi`" "
        $arguments += "/appfabric:`"$spPrereqBinaries\WindowsServerAppFabricSetup_x64.exe`" "
        $arguments += "/kb3092423:`"$spPrereqBinaries\AppFabric-KB3092423-x64-ENU.exe`" "
        $arguments += "/msipcclient:`"$spPrereqBinaries\setup_msipc_x64.msi`" "
        $arguments += "/wcfdataservices56:`"$spPrereqBinaries\WcfDataServices.exe`" "
        $arguments += "/odbc:`"$spPrereqBinaries\msodbcsql.msi`" "
        $arguments += "/msvcrt11:`"$spPrereqBinaries\vc_redist.x64.exe`" "
        $arguments += "/msvcrt14:`"$spPrereqBinaries\vcredist_x64.exe`""

        Run-SystemCommand -Command "$sharePointBinaries\prerequisiteinstaller.exe" -Arguments $arguments -RestartIfNecessary $true -RestartResult 3010

        Replace the $spPrereqBinaries value with the path where your prerequisite binaries are located.

        Replace $sharePointBinaries with the path to the root of your SharePoint 2016 distribution.

        Run the above script. The result should confirm a successful installation. If the server restarts, run the script again after the restart.

        Repeat until no restart is needed.

        Restart the PRIV-PAM server.

      3. Create SharePoint Server 2016 Installation configuration file
      4. Log on to PRIV-PAM as a priv\PAMAdmin (use password P@$$w0rd)

        In Notepad, paste the following:

        <Configuration>
          <Package Id="sts">
            <Setting Id="LAUNCHEDFROMSETUPSTS" Value="Yes" />
          </Package>
          <Package Id="spswfe">
            <Setting Id="SETUPCALLED" Value="1" />
          </Package>
          <Logging Type="verbose" Path="%temp%" Template="SharePoint Server Setup(*).log" />
          <PIDKEY Value="RTNGH-MQRV6-M3BWQ-DB748-VH7DM" />
          <Display Level="none" CompletionNotice="no" />
          <Setting Id="SERVERROLE" Value="SINGLESERVER" />
          <Setting Id="USINGUIINSTALLMODE" Value="1" />
          <Setting Id="SETUP_REBOOT" Value="Never" />
          <Setting Id="SETUPTYPE" Value="CLEAN_INSTALL" />
        </Configuration>

        In the configuration I have added the SharePoint 2016 evaluation key for the Standard edition. You are free to replace it with your own license key.

        Save the file as config.xml to a chosen location.

      5. Install SharePoint
      6. Open PowerShell ISE as an Admin and paste the following script:

        $sharePointBinaries = 'C:\Setup\Software\SharePoint2016'
        $configPath = 'C:\Setup'

        function Run-SystemCommand {
            Param(
                [parameter(Mandatory=$true)]
                [string]$Command,
                [parameter(Mandatory=$false)]
                [string]$Arguments = [String]::Empty,
                [parameter(Mandatory=$false)]
                [bool]$RestartIfNecessary = $false,
                [parameter(Mandatory=$false)]
                [int]$RestartResult
            )
            Process {
                try{
                    $myProcess = [Diagnostics.Process]::Start($Command, $Arguments)
                    $myProcess.WaitForExit()
                    [int]$exitCode = $myProcess.ExitCode
                    $result = ($exitCode -eq 0)
                    if($result) { Write-Host "[OK] $Command was successful" }
                    elseif ($RestartIfNecessary -and ($exitCode -eq $RestartResult)){
                        Write-Host "[Warning]Please rerun script after restart of the server"
                        Restart-Computer -Confirm
                    }
                    else { Write-Host "[Error] Failed to run $Command" }
                }
                catch {
                    Write-Host "[Error] Failed to run $Command"
                    Write-Host ("`t`t`t{0}" -f $_.Exception.Message)
                }
            }
        }

        Run-SystemCommand -Command "$sharePointBinaries\setup.exe" -Arguments "/config $configPath\config.xml" -RestartIfNecessary $true -RestartResult 30030

        Replace the $configPath value with the path where the config file created in the previous step is located.

        Replace $sharePointBinaries with the path to the root of your SharePoint 2016 distribution.

        Run the above script and wait until it finishes - it won't display installation progress. The result should confirm a successful installation.

      7. Create SharePoint Site
        1. Request, issue and install SSL certificate
        2. Open PowerShell ISE as an Admin and paste the following script:

          $file = @"
[NewRequest]
Subject = "CN=pamportal.contoso.com,c=AE, s=Dubai, l=Dubai, o=Contoso, ou=Blog"
MachineKeySet = TRUE
KeyLength = 2048
KeySpec=1
Exportable = TRUE
RequestType = PKCS10
[RequestAttributes]
CertificateTemplate = "WebServerV2"
"@

          Set-Content C:\Setup\certreq.inf $file
          Invoke-Expression -Command "certreq -new C:\Setup\certreq.inf C:\Setup\certreq.req"

          (Replace C:\Setup with a folder of your choice – the request file will be saved in this folder.)

          Run the above script and respond to the message box prompt “Template not found. Do you wish to continue anyway?” with “Yes”.

          Copy C:\Setup\certreq.req to the corresponding folder on the PROD-DC server.

          Log on to PROD-DC as an administrator.

          Open a command prompt as an admin.

          Run the following command:

          certreq -submit C:\Setup\certreq.req C:\Setup\pamportal.contoso.com.cer

          Here C:\Setup is the folder where the certificate request file is placed – modify the path according to your location.

          Confirm the CA when prompted.

          We now have the certificate file C:\Setup\pamportal.contoso.com.cer. Copy that file back to the PRIV-PAM server.

          Log on to PRIV-PAM as a priv\PAMAdmin (use password P@$$w0rd)

          Run PowerShell as Admin and execute the following:

          $cert = Import-Certificate -CertStoreLocation Cert:\LocalMachine\my -FilePath C:\Setup\pamportal.contoso.com.cer

          $guid = [guid]::NewGuid().ToString("B")

          $tPrint = $cert.Thumbprint

          netsh http add sslcert hostnameport="pamportal.contoso.com:443" certhash=$tPrint certstorename=MY appid="$guid"

        3. Run the script to create the SharePoint site where the PAM Portal will be placed.
        4. Open PowerShell ISE as an Admin and paste the following script:

          $Passphrase = 'Y0vW8sDXktY29'
          $password = 'P@$$w0rd'

          Add-PSSnapin Microsoft.SharePoint.PowerShell

          #Initialize values required for the script
          $SecPhassphrase = (ConvertTo-SecureString -String $Passphrase -AsPlainText -force)
          $FarmAdminUser = 'PRIV\svc_PAMFarmWSS'
          $svcMIMPool = 'PRIV\svc_PAMAppPool'

          #Create new configuration database
          $secstr = New-Object -TypeName System.Security.SecureString
          $password.ToCharArray() | ForEach-Object {$secstr.AppendChar($_)}
          $cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $FarmAdminUser, $secstr
          New-SPConfigurationDatabase -DatabaseName 'MIM_SPS_Config' -DatabaseServer 'SPSSQL' -AdministrationContentDatabaseName 'MIM_SPS_Admin_Content' -Passphrase $SecPhassphrase -FarmCredentials $cred -LocalServerRole WebFrontEnd

          #Create new Central Administration site
          New-SPCentralAdministration -Port '2016' -WindowsAuthProvider "NTLM"

          #Perform the config wizard tasks
          #Install Help Collections
          Install-SPHelpCollection -All
          #Initialize security
          Initialize-SPResourceSecurity
          #Install services
          Install-SPService
          #Register features
          Install-SPFeature -AllExistingFeatures
          #Install Application Content
          Install-SPApplicationContent

          #Add managed account for Application Pool
          $cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $svcMIMPool, $secstr
          New-SPManagedAccount -Credential $cred

          #Create new ApplicationPool
          New-SPServiceApplicationPool -Name PAMSPSPool -Account $svcMIMPool

          #Create new Web Application.
          #This creates a Web application that uses classic mode windows authentication.
          #Claim-based authentication is not supported by MIM
          New-SPWebApplication -Name 'PAM Portal' -Url "https://pamportal.contoso.com" -Port 443 -HostHeader 'pamportal.contoso.com' -SecureSocketsLayer:$true -ApplicationPool "PAMSPSPool" -ApplicationPoolAccount (Get-SPManagedAccount $($svcMIMPool)) -AuthenticationMethod "Kerberos" -DatabaseName "PAM_SPS_Content"

          #Create new SP Site
          New-SPSite -Name 'PAM Portal' -Url "https://pamportal.contoso.com" -CompatibilityLevel 15 -Template "STS#0" -OwnerAlias $FarmAdminUser

          #Disable server-side view state. Required by MIM
          $contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
          $contentService.ViewStateOnServer = $false
          $contentService.Update()

          #configure SSL
          Set-WebBinding -name "PAM Portal" -BindingInformation ":443:pamportal.contoso.com" -PropertyName "SslFlags" -Value 1

          #Add Secondary Site Collection Administrator
          Set-SPSite -Identity "https://pamportal.contoso.com" -SecondaryOwnerAlias "PAMAdmin"

      8. Install MIM Service, MIM Portal and PAM
      9. Open a Command Prompt as an Admin and run the following command:

        msiexec.exe /passive /i "C:\Setup\Software\MIM2016SP1RTM\Service and Portal\Service and Portal.msi" /norestart /L*v C:\Setup\PAM.LOG ADDLOCAL="CommonServices,WebPortals,PAMServices" SQMOPTINSETTING="1" SERVICEADDRESS="pamsvc.contoso.com" FIREWALL_CONF="1" SHAREPOINT_URL="https://pamportal.contoso.com" SHAREPOINTUSERS_CONF="1" SQLSERVER_SERVER="SVCSQL" SQLSERVER_DATABASE="FIMService" EXISTINGDATABASE="0" MAIL_SERVER="mail.contoso.com" MAIL_SERVER_USE_SSL="1" MAIL_SERVER_IS_EXCHANGE="1" POLL_EXCHANGE_ENABLED="1" SERVICE_ACCOUNT_NAME="svc_PAMWs" SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" SERVICE_ACCOUNT_DOMAIN="PRIV" SERVICE_ACCOUNT_EMAIL="svc_PAMWs@prod.contoso.com" REQUIRE_REGISTRATION_INFO="0" REQUIRE_RESET_INFO="0" MIMPAM_REST_API_PORT="8086" PAM_MONITORING_SERVICE_ACCOUNT_DOMAIN="PRIV" PAM_MONITORING_SERVICE_ACCOUNT_NAME="svc_PAMMonitor" PAM_MONITORING_SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" PAM_COMPONENT_SERVICE_ACCOUNT_DOMAIN="PRIV" PAM_COMPONENT_SERVICE_ACCOUNT_NAME="svc_PAMComponent" PAM_COMPONENT_SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" PAM_REST_API_APPPOOL_ACCOUNT_DOMAIN="PRIV" PAM_REST_API_APPPOOL_ACCOUNT_NAME="svc_PAMAppPool" PAM_REST_API_APPPOOL_ACCOUNT_PASSWORD="P@$$w0rd" REGISTRATION_PORTAL_URL="http://localhost" SYNCHRONIZATION_SERVER_ACCOUNT="PRIV\svc_MIMMA" SHAREPOINTTIMEOUT="600"

        ("C:\Setup\Software\MIM2016SP1RTM\Service and Portal\Service and Portal.msi" replace with path to Service and Portal installation path, C:\Setup\PAM.LOG replace with path where installation log will be placed)

        When the installation finishes, open the C:\Setup\PAM.LOG file in Notepad and go to the end of the file. You should find the line:

        … Product: Microsoft Identity Manager Service and Portal -- Installation completed successfully.
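
        A quick, hedged way to double-check from PowerShell (assuming the default MIM service name FIMService and the log path used above):

        # Confirm the installer reported success and the MIM service is running
        Select-String -Path 'C:\Setup\PAM.LOG' -Pattern 'Installation completed successfully' | Select-Object -Last 1
        Get-Service -Name FIMService | Select-Object Name, Status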

        Open Internet Explorer and navigate to https://pamportal.contoso.com/IdentityManagement

        The portal should load:

        clip_image002

        Restart the PRIV-PAM server

      10. Configure SSL for pamapi.contoso.com
        1. Request, issue and install SSL certificate for the portal
        2. Open PowerShell ISE as an Admin and paste the following script:

          $file = @"
[NewRequest]
Subject = "CN=pamapi.contoso.com,c=AE, s=Dubai, l=Dubai, o=Contoso, ou=Blog"
MachineKeySet = TRUE
KeyLength = 2048
KeySpec=1
Exportable = TRUE
RequestType = PKCS10
[RequestAttributes]
CertificateTemplate = "WebServerV2"
"@

          Set-Content C:\Setup\certreq.inf $file
          Invoke-Expression -Command "certreq -new C:\Setup\certreq.inf C:\Setup\certreq.req"

          (Replace C:\Setup with a folder of your choice – the request file will be saved in this folder.)

          Run the above script and respond to the message boxes with “OK”.

          Copy C:\Setup\certreq.req to the corresponding folder on the PROD-DC server.

          Log on to PROD-DC as an administrator.

          Open a command prompt as an admin.

          Run the following command:

          certreq -submit C:\Setup\certreq.req C:\Setup\pamapi.contoso.com.cer

          Here C:\Setup is the folder where the certificate request file is placed – modify the path according to your location.

          Confirm the CA when prompted.

          We now have the certificate file C:\Setup\pamapi.contoso.com.cer. Copy that file back to the PRIV-PAM server.

          Log on to PRIV-PAM as a priv\PAMAdmin (use password P@$$w0rd)

          Run PowerShell as Admin and execute the following:

          $cert = Import-Certificate -CertStoreLocation Cert:\LocalMachine\my -FilePath C:\Setup\pamapi.contoso.com.cer

          $guid = [guid]::NewGuid().ToString("B")

          $tPrint = $cert.Thumbprint

          netsh http add sslcert hostnameport="pamapi.contoso.com:8086" certhash=$tPrint certstorename=MY appid="$guid"

        3. Configure SSL on pamapi.contoso.com
        4. Run PowerShell as Admin and execute the following:

          Set-WebBinding -Name 'MIM Privileged Access Management API' -BindingInformation ":8086:" -PropertyName Port -Value 8087

          New-WebBinding -Name "MIM Privileged Access Management API" -Port 8086 -Protocol https -HostHeader "pamapi.contoso.com" -SslFlags 1

          Remove-WebBinding -Name "MIM Privileged Access Management API" -BindingInformation ":8087:"
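
          To verify the result, a minimal sketch using the WebAdministration module (an assumption on my part – any IIS binding view will do) is:

          Import-Module WebAdministration
          # The API site should now listen on https over port 8086 with the pamapi.contoso.com host header
          Get-WebBinding -Name 'MIM Privileged Access Management API' | Format-Table protocol, bindingInformation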

        Conclusion of Part 3

        Now we are ready for Part 4 - installing the PAM Example Portal.

        In this exercise we went step by step through the PAM Portal setup. If you carefully followed all the steps, you now have a healthy and well-configured PAM deployment.

        We didn’t spend time on portal customization and branding, which I leave to you for the future.

        In Part 4 we will set up the PAM Example Portal.

        Until then,

        Have a great week!

        Disclaimer – All scripts and reports are provided ‘AS IS’

        This sample script is not supported under any Microsoft standard support program or service. This sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of this sample script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of this script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this sample script or documentation, even if Microsoft has been advised of the possibility of such damages.

        Most Common Mistakes in Active Directory and Domain Services – Part 3


        This blog post is the third (and last) part of the 'Most Common Mistakes in Active Directory and Domain Services' series.
        In the previous parts, we covered some major mistakes, like configuring multiple password policies using GPO and keeping the FFL/DFL at a lower version.
        The third part of the series is no exception: we'll review three additional mistakes and summarize the series.

        Series:

        Mistake #7: Installing Additional Server Roles and Applications on a Domain Controller

        When I review a customer's Active Directory environment, I often find additional Windows Server roles (other than the default ADDS and DNS roles) installed on one or more of the Domain Controllers.

        This can be any role - from RDS Licensing, through Certificate Authority and up to DHCP Server. Beside Windows Server roles, I also find special applications and features running on the Domain Controllers, like KMS (Key Management Service) host for volume activation, or Azure AD Connect for integrating on-premises directories with Azure AD.

        There is a wide variety of roles and applications which administrators install on the Domain Controllers, but there is one thing common to all of them: Domain Controllers are NOT the place for them.

        By default, any Domain Controller in a domain provides the same functionality and features as the others, which keeps Active Directory Domain Services unaffected if one Domain Controller becomes unavailable.
        Even in a case where the Domain Controller holding the FSMO roles becomes unavailable, the Domain Services will continue to work as expected for most scenarios (at least in the short term).

        When you install additional roles and applications on your Domain Controllers, two problems are raised:

        1. Domain Controllers with additional roles and features become unique and different compared to other Domain Controllers. If any of these Domain Controllers is turned off or gets damaged, its roles and features might be affected and become unavailable. This, in fact, creates a dependency between ADDS and other roles and affects the redundancy of the Active Directory Domain Services.
        2. Upgrading your Active Directory environment becomes a much more complicated task. A DHCP Server or a Certificate Authority role installed on your Domain Controllers will force you to deal with them first, and only then move forward and upgrade the Active Directory itself. This complexity might also affect other tasks like restoring a Domain Controller or even putting a Domain Controller into maintenance.

        This is why putting additional roles and applications on your Domain Controllers is not recommended for most cases.
        You can use a PowerShell script to easily get a report of the roles installed on your Domain Controllers (a minimal sketch follows). Note that this approach works only for Windows Server 2012 and above; for Windows Server 2008, you can use a WMI query.
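
        A minimal sketch of such a report (this is not the original script; it assumes the RSAT Active Directory module and PowerShell remoting to the Domain Controllers):

        Import-Module ActiveDirectory
        $dcs = (Get-ADDomainController -Filter *).HostName
        $report = foreach ($dc in $dcs) {
            Invoke-Command -ComputerName $dc -ScriptBlock {
                # List only installed roles (not role services or features)
                Get-WindowsFeature | Where-Object { $_.Installed -and $_.FeatureType -eq 'Role' } |
                    Select-Object @{n='DomainController';e={$env:COMPUTERNAME}}, DisplayName
            }
        }
        $report | Sort-Object DomainController | Format-Table -AutoSize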

         

        Bottom Line: Domain Controllers are designed to provide directory services for your users - allowing access to domain resources and responding to security authentication requests.
        Mixing Active Directory Domain Services with other roles and applications creates a dependency between the two, affects Domain Controller performance, and makes administrative tasks much more complicated.

        Do It Right: Use Domain Controllers for Active Directory Domain Services only, and install additional roles (let it be KMS or a DHCP server) on different servers.

        Mistake #8: Deploying Domain Controllers as a Windows Server With Desktop Experience 

        When you install Windows Server, you can choose between two installation options:

        • Windows Server with Desktop Experience - This is the standard user interface, including the desktop, Start menu, etc.
        • Windows Server - This is Server Core, which drops the standard user interface in favor of the command line.

        Although Windows Server Core has some major advantages compared to Desktop Experience, most administrators still choose to go with the full user interface, even for the most convenient and supported server roles like Active Directory Domain Services, Active Directory Certificate Services, and DHCP Server.

        Server Core is not a new option - it has been around since Windows Server 2008. It works great for the supported Windows roles and has some great advantages compared to Windows Server with Desktop Experience. Here are the most significant ones:

        • Reduce the potential attack surface and lower the chance for user mistakes - Windows Server Core reduces the potential attack surface by eliminating binaries and features that are not required for the supported roles (Active Directory Domain Services in our case).
          For example, the Explorer shell is not installed, which of course reduces the risks and exploits that can be manipulated and used to attack the server.
          Other than that, when customers are using Windows Server with Desktop Experience for Active Directory Domain Services, they usually also perform administrative tasks directly on their Domain Controllers using Remote Desktop.
          This is a very bad habit, as it may have a significant impact on the Domain Controllers' performance and functionality. It might also cause a Domain Controller to become unavailable by accidentally turning it off or running a heavy PowerShell script which drains the server's memory.
        • Improve administrative skills while still be able to use the GUI tools - by choosing Windows Server Core, you'll probably get the chance to use some PowerShell cmdlets and improve your PowerShell and scripting skills.
          Some customers think that this is the only way to manage and administer the server and its role, but that's not true.
          Alongside the Command Line options, you'll find some useful remote management tools, including Windows Admin Center, Server Manager, and Remote Server Administration Tools (RSAT).
          In our case, RSAT includes all the Active Directory administrative tools like Active Directory Users and Computers (dsa.msc) and the ADSI Editor (adsiedit.msc).
          It is also important to be familiar with the 'Server Core App Compatibility Feature on Demand' (FOD), which can be used to increase Windows Server Core 2019 compatibility with other applications and to provide administrative tools for troubleshooting scenarios.
          My recommendation is to deploy an administrative server for managing all domain services roles, including Active Directory Domain Services, DNS, DHCP, Active Directory Certificate Services, Volume Activation, and others.
        • Other advantages like reducing disk space and memory usage are also here, but they, by themselves, are not the reason for using Windows Server Core.

        You should be aware that unlike Windows Server 2012 R2, you cannot convert Windows Server 2016/2019 between Server Core and Server with Desktop Experience after installation.

        Bottom Line: Windows Server Core is not a compromise. For the supported Windows Server roles, it is the official recommendation by Microsoft. Using Windows Server with Full Desktop Experience increases the chances that your Domain Controllers will get messy and will be used for administration tasks rather than providing domain services.

        Do It Right: Install your Domain Controllers as a Windows Server Core, and use remote management tools to administer your domain resources and configuration. Consider deploying one of your Domain Controller as a Windows Server with Full Desktop Experience for forest recovery scenarios.

        Mistake #9: Use Subnets Without Mapping them to Active Directory sites

        Active Directory uses sites for many purposes. One of them is to inform clients about the Domain Controllers available within the site closest to the client.

        For doing that, each site is associated with the relevant subnets, which correspond to the range of IP addresses in the site. You can use Active Directory Sites and Services to manage and associate your subnets. 

        When a Windows domain client is looking for the nearest Domain Controller (what's known as the DC Locator process), Active Directory (or, more precisely, the NetLogon service on one of the Domain Controllers) looks up the client's IP address in its subnet-to-site association data.
        If the client's IP address is found in one of the subnets, the Domain Controller returns the relevant site information to the client, and the client uses this information to contact a Domain Controller within its site.

        When the client's IP address cannot be found, the client may connect to any Domain Controller, including ones that are physically far away from it.
        This can result in communication over slow WAN links, which has a direct impact on the client login process.

        If you suspect that you have missing subnets in your Active Directory environment, you can look for event ID 5807 (Source: NETLOGON) on your Domain Controllers.
        The event is logged when there are connections from clients whose IP addresses don't map to any of the existing AD sites.
        Those clients, along with their names and IP addresses, are listed by default in C:\Windows\debug\netlogon.log.

        You can use a PowerShell script to create a report of all clients which are not mapped to any AD site, based on the netlogon.log files from all of the Domain Controllers in the domain. A minimal sketch follows.
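
        This is not the original script, only a sketch; it assumes the RSAT Active Directory module and that the admin$ share on each Domain Controller is reachable:

        Import-Module ActiveDirectory
        $dcs = (Get-ADDomainController -Filter *).HostName
        $results = foreach ($dc in $dcs) {
            $log = "\\$dc\admin`$\debug\netlogon.log"
            if (Test-Path $log) {
                # NO_CLIENT_SITE entries list the client name and IP address that had no site mapping
                Select-String -Path $log -Pattern 'NO_CLIENT_SITE:\s+(\S+)\s+(\S+)' | ForEach-Object {
                    [pscustomobject]@{
                        DomainController = $dc
                        ClientName       = $_.Matches[0].Groups[1].Value
                        ClientIP         = $_.Matches[0].Groups[2].Value
                    }
                }
            }
        }
        $results | Sort-Object ClientIP -Unique | Format-Table -AutoSize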

        The script output should look similar to this:

        Bottom Line: The association of subnets to Active Directory sites has a significant impact on client machine performance. Missing this association may lead to poor performance and unexpectedly long logon times.

        Do It Right: Work together with your IT network team to make sure any new scope is covered and has a corresponding subnet associated with an Active Directory site.

        So... this was the last part of the 'Most Common Mistakes in Active Directory and Domain Services' series.
        Hope you enjoyed reading these blog posts and learned a thing or two.

        Time zone issues when copying SCOM alerts


        Background

        When trying to copy-paste (Ctrl+C, Ctrl+V) alerts from the SCOM console to an Excel worksheet or just a text file, we noticed that the Created field values were different from the ones displayed in the console. There was a two-hour difference.

        1

        2

        As it turns out, the server was configured in a GMT+2 time zone, and the values got pasted in UTC. Hence the two-hour difference.

        Solution

        On each of the servers/workstations with SCOM console installed where you want to fix this, simply create the following registry key and value:

        Key: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Console\ViewCopySettings\

        Value: InLocalTime (DWord)

        Data: 1

        (Where 1 means that you want to have the values in your local time, and 0 means the default behaviour of UTC)
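
        If you need to roll this out to several consoles, a minimal sketch in PowerShell (per user, since the value lives under HKEY_CURRENT_USER) could be:

        $key = 'HKCU:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Console\ViewCopySettings'
        # Create the key if it does not exist, then set InLocalTime = 1 (local time on copy)
        New-Item -Path $key -Force | Out-Null
        New-ItemProperty -Path $key -Name 'InLocalTime' -PropertyType DWord -Value 1 -Force | Out-Null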

        3



        Conclusion

        With some digging done by me and my colleagues using Procmon, we were able to find out that the copy mechanism tries to read a registry key and value that do not exist.

        So… “When in doubt, run Process Monitor” – Mark Russinovich.


        Hope this helps,

        Oren Salzberg.

        Field Notes: The case of buried Active Directory Account Management Security Audit Policy events


        Security auditing is one of the most powerful tools that you can use to maintain the integrity of your system.  As part of your overall security strategy, you should determine the level of auditing that is appropriate for your environment.  Auditing should identify attacks (successful or not) that pose a threat to your network, and attacks against resources that you have determined to be valuable in your risk assessment.

        In this blog post, I discuss a common security audit policy configuration I come across in a number of environments (with special focus on Account Management).  I also highlight the difference between basic and advanced security audit policy settings.  Lastly, I point you to where recommendations that can help you fine-tune these policies can be obtained.

        Background

        It may appear that events relating to user account management activities in Active Directory (AD) are not logged in the security event logs on domain controllers (DC).  This is an example of a view on one DC:

        Cluttered Security Event Log

        Here we see a lot of events from the Filtering Platform Packet Drop and Filtering Platform Connection subcategories - the image shows ten of these within the same second! 

        We see the following events on the same log about two minutes later (Directory Service Replication):

        Cluttered Security Event Log

        It can also be seen that there was an event relating to a successful Directory Service Access (DS Access) activity, but it is only one among quite a lot of noise!

        Running the following command in an elevated prompt helps in figuring out what triggers these events:

         auditpol /get /category:"DS Access,Object Access" 

        The output below reveals that every subcategory in both the Object Access and DS Access categories is set to capture success and failure events.

        Auditpol Output

        Note: running auditpol unelevated will result in the following error:

        Error 0x00000522 occurred:

        A required privilege is not held by the client.

        To complete the picture, this is what it looked like in the Group Policy Editor:

        Basic Audit Policy Settings Group Policy Management Editor

        Do we need all these security audit events?  Let us look at what some of the recommendations are.


        Security auditing recommendations

        Guidance from tools such as the Security Compliance Manager (SCM) states that if audit settings are not configured, it can be difficult or impossible to determine what occurred during a security incident.  However, if audit settings are configured so that events are generated for all activities, the security log fills up with data and becomes hard to use.  We need a good balance.

        Let us take a closer look at these subcategories:

        Filtering Platform Packet Drop

        This subcategory reports when packets are dropped by Windows Filtering Platform (WFP).  These events can be very high in volume.  The default and recommended setting is no auditing on AD domain controllers.

        Filtering Platform Connection

        This subcategory reports when connections are allowed or blocked by WFP.  These events can be high in volume.  The default and recommended setting is no auditing on AD domain controllers.

        Directory Service Replication

        This subcategory reports when replication between two domain controllers begins and ends.  The default and recommended setting is no auditing on AD domain controllers.

        These descriptions and recommendations are from SCM but there is also the Policy Analyzer, which is part of the Microsoft Security Compliance Toolkit, you can look at using for guidance.  There’s also this document if you do not have any of these tools installed.

        Tuning audit settings

        Turning on everything – successes and failures – is obviously not in line with security audit policy recommendations.  If you have an environment that was built on Windows Server 2008 R2 or above, the advanced audit policy configuration is available to use in Group Policy.

        Important

        Basic versus Advanced

        Reference: https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd692792(v=ws.10)

        If you already have settings configured in the basic audit policy and want to start leveraging the advanced audit policy in order to benefit from granularity offered by the latter, you need to carefully plan for the migration.

        Getting Started with Advanced Audit Policy Configuration

        In case you are wondering what I mean by granularity, see a comparison of the two below.

        Basic Audit Policy Settings

        In this example, I set the audit directory service access (DS Access) category to success:

        Example of Basic Audit Policy Settings

        Notice that all subcategories are affected as there is no granularity offered here (every subcategory is set to success):

        Outcome of Basic Audit Policy Setting

Side note: take a look back at the Group Policy Management Editor window focusing on Audit Policy while we are here.  Notice that Audit policy change is set to No auditing instead of Not defined.  Here is the difference between the two:

        • Not defined means that group policy does not enforce this setting – Windows (Server) will assume the default setting
        • No auditing means that auditing is turned off – see example below

        No Auditing

        Advanced Audit Policy Settings

        On the other hand, the advanced security audit policy does offer fine-grained control.  The example below demonstrates granularity that could be realized when using the advanced security audit policies:

• Audit Detailed Directory Service Replication – No Auditing
• Audit Directory Service Access – Success and Failure
• Audit Directory Service Changes – Success
• Audit Directory Service Replication – No Auditing

        Example of Advanced Audit Policy Settings

The output of auditpol confirms the expected result:

        Outcome of Advanced Audit Policy Settings

        The outcome

        After turning off basic security audit policies and implementing the advanced settings based on the recommendations shared above, the security event logs start to make sense since a lot of the “noise” has been removed.  We start seeing desired events logged in the security log as depicted below:

        Neat Security Event Log

        Keep in mind that these events are local to each DC, and that the event logs are configured to overwrite events as needed (oldest events first) by default.  Solutions such as System Center Operations Manager Audit Collection Services can help capture, centralize and archive these events.
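If you want to see how much history each DC can hold, a quick way is to query the Security log's size, retention mode and record count (a sketch, not from the original post; replace DC01 with one of your domain controller names):

 Get-WinEvent -ListLog Security -ComputerName DC01 | Select-Object LogName, MaximumSizeInBytes, LogMode, RecordCount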

        Till next time…


        Field Notes: The case of the failed SQL Server Failover Cluster Instance – Binaries Disks Added to Cluster


        I paid a customer a visit a while ago and was requested to assist with a SQL Server Failover Cluster issue they were experiencing.  They had internally transferred the case from the SQL team to folks who look after the Windows Server platform as they could not pick up anything relating to SQL during initial troubleshooting efforts.

        My aim in this post is to:

        • explain what the issue was (adding disks meant to be local storage to the cluster)
        • provide a little bit of context on cluster disks and asymmetric storage configuration
        • discuss how the issue was resolved by removing the disks from cluster

        Issue definition and scope

        An attempt to move the SQL Server role/group from one node to another in a 2-node Failover Cluster failed.  This is what they observed:

        Failed SQL Server Group

        From the image above, it can be seen that all disk resources are online.  Would you suspect that storage is involved at this stage?  In cluster events, there was the standard Event ID 1069 confirming that the cluster resource ‘SQL Server’ of type ‘SQL Server’ in clustered role ‘SQL Server (MSSQLSERVER)’ failed.  Additionally, this is what was in the cluster log – “failed to start service with error 2”:

        Cluster Log

        Error code 2 means that the system cannot find the file specified:

        Net HelpMsg
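If you want to reproduce the lookup yourself, the same translation can be done from a command prompt (a quick check, not part of the original post's output):

 net helpmsg 2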

        A little bit of digging around reveals that this is the image path we are failing to get to:

        Registry value

        Now that we have all this information, let’s look at how you would resolve this specific issue we were facing.  Before that however, I would like to provide a bit of context relating to cluster disks, especially on Asymmetric Storage Configuration.

        Context

        Consider a 2 node SQL Server Failover Cluster Instance running on a Windows Server 2012 R2 Failover Cluster with the following disk configuration:

        • C drive for the Operating System – each of the nodes has a direct attached disk
        • D drive for SQL binaries – each of the nodes has a dedicated “local” drive, presented from a Storage Area Network (SAN)
        • All the other drives required for SQL are shared drives presented from the SAN

        Disks in Server Manager

        Note: The 20 GB drive is presented from the SAN and is not added to the cluster at this stage.

        I used Hyper-V Virtual Machines to reproduce this issue in a lab environment.  For the SAN part, I used the iSCSI target that is built-in to Windows Server.
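For completeness, here is a hypothetical sketch of how such a "local" 20 GB binaries disk could be carved out on the Windows Server iSCSI target and presented to a single node only – the paths, target name and IQN below are example values of mine, not from the customer environment:

 # Create a 20 GB virtual disk on the iSCSI target server
 New-IscsiVirtualDisk -Path 'C:\iSCSI\PTA-SQL11-Binaries.vhdx' -SizeBytes 20GB
 # Create a target that only the first node's initiator can connect to
 New-IscsiServerTarget -TargetName 'PTA-SQL11-Binaries' -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:pta-sql11.contoso.com'
 # Map the virtual disk to that target
 Add-IscsiVirtualDiskTargetMapping -TargetName 'PTA-SQL11-Binaries' -Path 'C:\iSCSI\PTA-SQL11-Binaries.vhdx'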

         

        Asymmetric Storage Configuration

A feature enhancement in Failover Clustering for Windows Server 2012 and Windows Server 2012 R2 is support for an Asymmetric Storage Configuration.  In Windows Server 2012, a disk is considered clusterable if it is presented to one or more nodes, is not the boot/system disk, and does not contain a page file.  https://support.microsoft.com/en-us/help/2813005/local-sas-disks-getting-added-in-windows-server-2012-failover-cluster

         

        What happens when you Add Disks to Cluster?

        Let us first take a look at the disks node in Failover Cluster Manager (FCM) before adding the disks.

        Disks in Failover Cluster Manager

        Here’s what we have (ordered by the disk number column):

        • The Failover Cluster Witness disk (1 GB)
        • SQL Data (50 GB)
        • SQL Logs (10 GB)
        • Other Stuff (5 GB)

        The following window is presented when an attempt to add disks to a cluster operation is performed in FCM:

        Add Disks to a Cluster

        Both disks are added as cluster disks when one clicks OK at this stage.  After adding the disks (which are not presented to both nodes), we see the following:

        Disks in Failover Cluster Manager

        Nothing changed regarding the 4 disks we have already seen in FCM, and the two “local” disks are now included:

        • Cluster Disk 1 is online on node PTA-SQL11
        • Cluster Disk 2 is offline on node PTA-SQL11 as it is not physically connected to the node

        At this stage, everything still works fine as the SQL binaries volume is still available on this node.  Note that the “Available Storage” group is running on PTA-SQL11.
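As a side note, the same operation can be performed from PowerShell – disks that are visible to at least one node and not yet clustered are reported as available (a sketch; the post itself uses Failover Cluster Manager):

 # List disks the cluster considers available for clustering
 Get-ClusterAvailableDisk
 # Add them all as cluster disks (equivalent of clicking OK in FCM)
 Get-ClusterAvailableDisk | Add-ClusterDisk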

         

        What happens when you move the Available Storage group?

        Move Available Storage

        Let’s take a look at FCM again:

        Disks in Failover Cluster Manager

        Now we see that:

        • Cluster Disk 1 is now offline
        • Cluster Disk 2 is now online
        • The owner of the “Available Storage” group is now PTA-SQL12

        This means that PTA-SQL12 can see the SQL binaries volume and PTA-SQL11 cannot, which causes downtime.  Moving the SQL group to PTA-SQL12 works just fine as the SQL binaries drive is online on that node.  You may also want to ensure that the resources are configured to automatically recover from failures.  Below is an example of default configuration on a resource:

        Resource Properties
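For reference, the observations above can also be made from PowerShell (a sketch, not from the original post):

 # Show the current owner node of the Available Storage group
 Get-ClusterGroup 'Available Storage'
 # Show the online/offline state of each clustered disk
 Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'
 # Equivalent of the move performed above
 Move-ClusterGroup 'Available Storage' -Node PTA-SQL12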

         

        Process People and Technology

        It may appear that the technology is at fault here, but the Failover Cluster service does its bit to protect us from shooting ourselves in the foot, and here are some examples:

        Validation

        The Failover Cluster validation report does a good job in letting you know that disks are only visible from one node.  By the way, there’s also good information here on what’s considered for a disk to be clustered.

        Validation Report

        A warning is more like a “proceed with caution” when looking at a validation report.  Failures/errors mean that the solution does not meet requirements for Microsoft support.  Also be careful when validating storage as services may be taken offline.

         

        Logic

In the following snippet from the cluster log, we see an example of the Failover Cluster Resource Control Manager (RCM) preventing a move of the “Available Storage” group in order to avoid downtime.

        Cluster Log

        Back online and way forward

        To get the service up and running again, we had to remove both Disk 1 and Disk 2 as cluster disks and make them “local” drives again.  The cause was that an administrator had added disks that were not meant to be part of the cluster as clustered disks.

        Disks need to be made online from a tool such as the Disk Management console as they are automatically placed in an offline state to avoid possible issues that may be caused by having a non-clustered disk online on two or more nodes in a shared disk scenario.
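In our case that meant removing the two disks from the cluster and bringing each one online again on its respective node – roughly along these lines (a sketch, assuming the resources are still named Cluster Disk 1 and Cluster Disk 2 as above):

 # Remove the wrongly clustered disks (run on one node)
 Remove-ClusterResource -Name 'Cluster Disk 1'
 Remove-ClusterResource -Name 'Cluster Disk 2'
 # On each node, identify the binaries disk and bring it online again (disk number 4 is an example)
 Get-Disk | Where-Object IsOffline
 Set-Disk -Number 4 -IsOffline $false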

I got curious after this and reached out to folks who specialize in SQL Server to get their views on whether the SQL binaries drive should or should not be shared.  One of the strong views is to keep them as non-shared (non-clustered) drives, especially for scenarios such as SQL patching.  What happens if SQL patching fails in a shared drive scenario, for example?

        Anyway, it would be great to hear from you through comments.

        Till next time…

        Step by step MIM PAM setup and evaluation Guide – Part 3


This is the third part of the series. In the previous posts we prepared the test environment for PAM deployment, created and configured all the needed service accounts, installed SQL Server and prepared the PAM server for further installation. Now we have two forests – prod.contoso.com and priv.contoso.com. In PROD we have set up Certificate Services, an Exchange server and ADFS, and configured two test applications – one using Windows Integrated Authentication and the other Claims-based Authentication. In the PRIV forest we have the PAM server prepared for MIM/PAM deployment, with SQL Server ready.

        Series:

        Installing PAM Server

1. Install SharePoint 2016
    a. Download SharePoint 2016 Prerequisites

Please download the following binaries into one selected folder (for example C:\Setup\Software\SP2016-Prerequisites) on the PRIV-PAM server:

        Cumulative Update 7 (KB3092423) for Microsoft AppFabric 1.1 for Windows Server [https://www.microsoft.com/en-us/download/details.aspx?id=49171]

        Microsoft Identity Extensions [http://go.microsoft.com/fwlink/?LinkID=252368]

        Microsoft ODBC Driver 11 for SQL Server [http://www.microsoft.com/en-us/download/details.aspx?id=36434]

        Microsoft Information Protection and Control Client [http://go.microsoft.com/fwlink/?LinkID=528177]

        Microsoft SQL Server 2012 Native Client [http://go.microsoft.com/fwlink/?LinkID=239648&clcid=0x409]

Microsoft Sync Framework Runtime v1.0 SP1 (x64) [http://www.microsoft.com/en-us/download/details.aspx?id=17616] – open SyncSetup_en.x64.zip and extract only Synchronization.msi to this folder

        Visual C++ Redistributable Package for Visual Studio 2013 [http://www.microsoft.com/en-us/download/details.aspx?id=40784]

        Visual C++ Redistributable for Visual Studio 2015 [https://www.microsoft.com/en-us/download/details.aspx?id=48145]

        Microsoft WCF Data Services 5.0 [http://www.microsoft.com/en-us/download/details.aspx?id=29306]

        Windows Server AppFabric 1.1 [http://www.microsoft.com/en-us/download/details.aspx?id=27115]

At the end, you should have the following binaries in the selected folder (a quick verification sketch follows the list):

              • AppFabric-KB3092423-x64-ENU.exe
              • MicrosoftIdentityExtensions-64.msi
              • msodbcsql.msi
              • setup_msipc_x64.msi
              • sqlncli.msi
              • Synchronization.msi
              • vcredist_x64.exe
              • vc_redist.x64.exe
              • WcfDataServices.exe
              • WindowsServerAppFabricSetup_x64.exe
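The following optional check is not part of the original guide – it simply confirms that all prerequisite binaries are present before you run prerequisiteinstaller.exe (it assumes the same folder used in the script below; an empty result means nothing is missing):

 $spPrereqBinaries = 'C:\Setup\Software\SP2016-Prerequisites'
 $required = 'AppFabric-KB3092423-x64-ENU.exe','MicrosoftIdentityExtensions-64.msi','msodbcsql.msi',
             'setup_msipc_x64.msi','sqlncli.msi','Synchronization.msi','vcredist_x64.exe',
             'vc_redist.x64.exe','WcfDataServices.exe','WindowsServerAppFabricSetup_x64.exe'
 # List any file that is not found in the prerequisites folder
 $required | Where-Object { -not (Test-Path (Join-Path $spPrereqBinaries $_)) }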
    b. Install SharePoint Prerequisites

Log on to PRIV-PAM as PRIV\PAMAdmin (use password P@$$w0rd)

Open PowerShell ISE as an Admin and paste the following script:

$spPrereqBinaries = 'C:\Setup\Software\SP2016-Prerequisites'
$sharePointBinaries = 'C:\Setup\Software\SharePoint2016'

function Run-SystemCommand {
    Param(
        [parameter(Mandatory=$true)]
        [string]$Command,
        [parameter(Mandatory=$false)]
        [string]$Arguments = [String]::Empty,
        [parameter(Mandatory=$false)]
        [bool]$RestartIfNecessary = $false,
        [parameter(Mandatory=$false)]
        [int]$RestartResult
    )
    Process {
        try {
            $myProcess = [Diagnostics.Process]::Start($Command, $Arguments)
            $myProcess.WaitForExit()
            [int]$exitCode = $myProcess.ExitCode
            $result = ($exitCode -eq 0)
            if ($result) { Write-Host "[OK] $Command was successful" }
            elseif ($RestartIfNecessary -and ($exitCode -eq $RestartResult)) {
                Write-Host "[Warning] Please rerun script after restart of the server"
                Restart-Computer -Confirm
            }
            else { Write-Host "[Error] Failed to run $Command" }
        }
        catch {
            Write-Host "[Error] Failed to run $Command"
            Write-Host ("`t`t`t{0}" -f $_.Exception.Message)
        }
    }
}

$arguments = "/sqlncli:`"$spPrereqBinaries\sqlncli.msi`" "
$arguments += "/idfx11:`"$spPrereqBinaries\MicrosoftIdentityExtensions-64.msi`" "
$arguments += "/sync:`"$spPrereqBinaries\Synchronization.msi`" "
$arguments += "/appfabric:`"$spPrereqBinaries\WindowsServerAppFabricSetup_x64.exe`" "
$arguments += "/kb3092423:`"$spPrereqBinaries\AppFabric-KB3092423-x64-ENU.exe`" "
$arguments += "/msipcclient:`"$spPrereqBinaries\setup_msipc_x64.msi`" "
$arguments += "/wcfdataservices56:`"$spPrereqBinaries\WcfDataServices.exe`" "
$arguments += "/odbc:`"$spPrereqBinaries\msodbcsql.msi`" "
$arguments += "/msvcrt11:`"$spPrereqBinaries\vc_redist.x64.exe`" "
$arguments += "/msvcrt14:`"$spPrereqBinaries\vcredist_x64.exe`""

Run-SystemCommand -Command "$sharePointBinaries\prerequisiteinstaller.exe" -Arguments $arguments -RestartIfNecessary $true -RestartResult 3010

Replace the $spPrereqBinaries value with the path where your prerequisite binaries are located, and $sharePointBinaries with the path to the root of your SharePoint 2016 distribution.

Run the above script. The result should confirm a successful installation. If the server restarts, run the script again after the restart, and repeat until a restart is no longer required.

        Restart PRIV-PAM server.

    c. Create the SharePoint Server 2016 installation configuration file

Log on to PRIV-PAM as PRIV\PAMAdmin (use password P@$$w0rd)

In Notepad, paste the following:

<Configuration>
  <Package Id="sts">
    <Setting Id="LAUNCHEDFROMSETUPSTS" Value="Yes" />
  </Package>
  <Package Id="spswfe">
    <Setting Id="SETUPCALLED" Value="1" />
  </Package>
  <Logging Type="verbose" Path="%temp%" Template="SharePoint Server Setup(*).log" />
  <PIDKEY Value="RTNGH-MQRV6-M3BWQ-DB748-VH7DM" />
  <Display Level="none" CompletionNotice="no" />
  <Setting Id="SERVERROLE" Value="SINGLESERVER" />
  <Setting Id="USINGUIINSTALLMODE" Value="1" />
  <Setting Id="SETUP_REBOOT" Value="Never" />
  <Setting Id="SETUPTYPE" Value="CLEAN_INSTALL" />
</Configuration>

In the configuration I have added the SharePoint 2016 evaluation key for the Standard edition. You are free to replace it with your own license key.

Save the file as config.xml to a location of your choice.

    d. Install SharePoint

Open PowerShell ISE as an Admin and paste the following script:

$sharePointBinaries = 'C:\Setup\Software\SharePoint2016'
$configPath = 'C:\Setup'

function Run-SystemCommand {
    Param(
        [parameter(Mandatory=$true)]
        [string]$Command,
        [parameter(Mandatory=$false)]
        [string]$Arguments = [String]::Empty,
        [parameter(Mandatory=$false)]
        [bool]$RestartIfNecessary = $false,
        [parameter(Mandatory=$false)]
        [int]$RestartResult
    )
    Process {
        try {
            $myProcess = [Diagnostics.Process]::Start($Command, $Arguments)
            $myProcess.WaitForExit()
            [int]$exitCode = $myProcess.ExitCode
            $result = ($exitCode -eq 0)
            if ($result) { Write-Host "[OK] $Command was successful" }
            elseif ($RestartIfNecessary -and ($exitCode -eq $RestartResult)) {
                Write-Host "[Warning] Please rerun script after restart of the server"
                Restart-Computer -Confirm
            }
            else { Write-Host "[Error] Failed to run $Command" }
        }
        catch {
            Write-Host "[Error] Failed to run $Command"
            Write-Host ("`t`t`t{0}" -f $_.Exception.Message)
        }
    }
}

Run-SystemCommand -Command "$sharePointBinaries\setup.exe" -Arguments "/config $configPath\config.xml" -RestartIfNecessary $true -RestartResult 30030

Replace the $configPath value with the path where the config file created in the previous step is located, and $sharePointBinaries with the path to the root of your SharePoint 2016 distribution.

Run the above script and wait until it finishes – it won’t display installation progress. The result should confirm a successful installation.

2. Create the SharePoint Site
    a. Request, issue and install an SSL certificate

Open PowerShell ISE as an Admin and paste the following script:

$file = @"
[NewRequest]
Subject = "CN=pamportal.contoso.com, c=AE, s=Dubai, l=Dubai, o=Contoso, ou=Blog"
MachineKeySet = TRUE
KeyLength = 2048
KeySpec = 1
Exportable = TRUE
RequestType = PKCS10
[RequestAttributes]
CertificateTemplate = "WebServerV2"
"@

Set-Content C:\Setup\certreq.inf $file
Invoke-Expression -Command "certreq -new C:\Setup\certreq.inf C:\Setup\certreq.req"

(Replace C:\Setup with a folder of your choice – the request file will be saved in this folder.)

Run the above script and respond to the "Template not found. Do you wish to continue anyway?" prompt with "Yes".

Copy C:\Setup\certreq.req to the corresponding folder on the PROD-DC server.

        Log on to PROD-DC as an administrator

        Open command prompt as an admin.

        Run following command:

certreq -submit C:\Setup\certreq.req C:\Setup\pamportal.contoso.com.cer

Here C:\Setup is the folder where the certificate request file is placed – modify the path according to your location.

        Confirm CA when prompted

We now have the certificate file C:\Setup\pamportal.contoso.com.cer. Copy that file back to the PRIV-PAM server.

Log on to PRIV-PAM as PRIV\PAMAdmin (use password P@$$w0rd)

Run PowerShell as Admin and execute the following:

$cert = Import-Certificate -CertStoreLocation Cert:\LocalMachine\my -FilePath C:\Setup\pamportal.contoso.com.cer
$guid = [guid]::NewGuid().ToString("B")
$tPrint = $cert.Thumbprint
netsh http add sslcert hostnameport="pamportal.contoso.com:443" certhash=$tPrint certstorename=MY appid="$guid"
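You can optionally verify that the binding was registered (a quick check, not part of the original guide):

 netsh http show sslcert hostnameport=pamportal.contoso.com:443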

    b. Run a script to create the SharePoint site where the PAM Portal will be placed

Open PowerShell ISE as an Admin and paste the following script:

$Passphrase = 'Y0vW8sDXktY29'
$password = 'P@$$w0rd'

Add-PSSnapin Microsoft.SharePoint.PowerShell

# Initialize values required for the script
$SecPhassphrase = (ConvertTo-SecureString -String $Passphrase -AsPlainText -Force)
$FarmAdminUser = 'PRIV\svc_PAMFarmWSS'
$svcMIMPool = 'PRIV\svc_PAMAppPool'

# Create new configuration database
$secstr = New-Object -TypeName System.Security.SecureString
$password.ToCharArray() | ForEach-Object {$secstr.AppendChar($_)}
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $FarmAdminUser, $secstr
New-SPConfigurationDatabase -DatabaseName 'MIM_SPS_Config' -DatabaseServer 'SPSSQL' -AdministrationContentDatabaseName 'MIM_SPS_Admin_Content' -Passphrase $SecPhassphrase -FarmCredentials $cred -LocalServerRole WebFrontEnd

# Create new Central Administration site
New-SPCentralAdministration -Port '2016' -WindowsAuthProvider "NTLM"

# Perform the config wizard tasks
# Install Help Collections
Install-SPHelpCollection -All
# Initialize security
Initialize-SPResourceSecurity
# Install services
Install-SPService
# Register features
Install-SPFeature -AllExistingFeatures
# Install Application Content
Install-SPApplicationContent

# Add managed account for the Application Pool
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $svcMIMPool, $secstr
New-SPManagedAccount -Credential $cred

# Create new Application Pool
New-SPServiceApplicationPool -Name PAMSPSPool -Account $svcMIMPool

# Create new Web Application.
# This creates a Web application that uses classic mode Windows authentication.
# Claims-based authentication is not supported by MIM
New-SPWebApplication -Name 'PAM Portal' -Url "https://pamportal.contoso.com" -Port 443 -HostHeader 'pamportal.contoso.com' -SecureSocketsLayer:$true -ApplicationPool "PAMSPSPool" -ApplicationPoolAccount (Get-SPManagedAccount $($svcMIMPool)) -AuthenticationMethod "Kerberos" -DatabaseName "PAM_SPS_Content"

# Create new SP Site
New-SPSite -Name 'PAM Portal' -Url "https://pamportal.contoso.com" -CompatibilityLevel 15 -Template "STS#0" -OwnerAlias $FarmAdminUser

# Disable server-side view state. Required by MIM
$contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$contentService.ViewStateOnServer = $false
$contentService.Update()

# Configure SSL
Set-WebBinding -Name "PAM Portal" -BindingInformation ":443:pamportal.contoso.com" -PropertyName "SslFlags" -Value 1

# Add Secondary Site Collection Administrator
Set-SPSite -Identity "https://pamportal.contoso.com" -SecondaryOwnerAlias "PAMAdmin"

3. Install MIM Service, MIM Portal and PAM

Open a Command Prompt as an Admin and run the following command:

msiexec.exe /passive /i "C:\Setup\Software\MIM2016SP1RTM\Service and Portal\Service and Portal.msi" /norestart /L*v C:\Setup\PAM.LOG ADDLOCAL="CommonServices,WebPortals,PAMServices" SQMOPTINSETTING="1" SERVICEADDRESS="pamsvc.contoso.com" FIREWALL_CONF="1" SHAREPOINT_URL="https://pamportal.contoso.com" SHAREPOINTUSERS_CONF="1" SQLSERVER_SERVER="SVCSQL" SQLSERVER_DATABASE="FIMService" EXISTINGDATABASE="0" MAIL_SERVER="mail.contoso.com" MAIL_SERVER_USE_SSL="1" MAIL_SERVER_IS_EXCHANGE="1" POLL_EXCHANGE_ENABLED="1" SERVICE_ACCOUNT_NAME="svc_PAMWs" SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" SERVICE_ACCOUNT_DOMAIN="PRIV" SERVICE_ACCOUNT_EMAIL="svc_PAMWs@prod.contoso.com" REQUIRE_REGISTRATION_INFO="0" REQUIRE_RESET_INFO="0" MIMPAM_REST_API_PORT="8086" PAM_MONITORING_SERVICE_ACCOUNT_DOMAIN="PRIV" PAM_MONITORING_SERVICE_ACCOUNT_NAME="svc_PAMMonitor" PAM_MONITORING_SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" PAM_COMPONENT_SERVICE_ACCOUNT_DOMAIN="PRIV" PAM_COMPONENT_SERVICE_ACCOUNT_NAME="svc_PAMComponent" PAM_COMPONENT_SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" PAM_REST_API_APPPOOL_ACCOUNT_DOMAIN="PRIV" PAM_REST_API_APPPOOL_ACCOUNT_NAME="svc_PAMAppPool" PAM_REST_API_APPPOOL_ACCOUNT_PASSWORD="P@$$w0rd" REGISTRATION_PORTAL_URL="http://localhost" SYNCHRONIZATION_SERVER_ACCOUNT="PRIV\svc_MIMMA" SHAREPOINTTIMEOUT="600"

(Replace "C:\Setup\Software\MIM2016SP1RTM\Service and Portal\Service and Portal.msi" with the path to your Service and Portal installation media, and C:\Setup\PAM.LOG with the path where the installation log should be written.)

When the installation finishes, open the C:\Setup\PAM.LOG file in Notepad and go to the end of the file. You should find the line:

        … Product: Microsoft Identity Manager Service and Portal — Installation completed successfully.
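If you prefer to check from PowerShell rather than Notepad, something like this works (a minimal sketch, assuming the log path used above):

 Get-Content C:\Setup\PAM.LOG -Tail 40 | Select-String 'Installation completed successfully'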

        Open Internet Explorer and navigate to https://pamportal.contoso.com/IdentityManagement

The portal should load:

MIM Portal

        Restart the PRIV-PAM server

4. Configure SSL for pamapi.contoso.com
    a. Request, issue and install an SSL certificate for pamapi.contoso.com

Open PowerShell ISE as an Admin and paste the following script:

$file = @"
[NewRequest]
Subject = "CN=pamapi.contoso.com, c=AE, s=Dubai, l=Dubai, o=Contoso, ou=Blog"
MachineKeySet = TRUE
KeyLength = 2048
KeySpec = 1
Exportable = TRUE
RequestType = PKCS10
[RequestAttributes]
CertificateTemplate = "WebServerV2"
"@

Set-Content C:\Setup\certreq.inf $file
Invoke-Expression -Command "certreq -new C:\Setup\certreq.inf C:\Setup\certreq.req"

(Replace C:\Setup with a folder of your choice – the request file will be saved in this folder.)

Run the above script and respond to the message boxes with "OK".

Copy C:\Setup\certreq.req to the corresponding folder on the PROD-DC server.

        Log on to PROD-DC as an administrator

        Open command prompt as an admin.

        Run following command:

certreq -submit C:\Setup\certreq.req C:\Setup\pamapi.contoso.com.cer

Here C:\Setup is the folder where the certificate request file is placed – modify the path according to your location.

        Confirm CA when prompted

We now have the certificate file C:\Setup\pamapi.contoso.com.cer. Copy that file back to the PRIV-PAM server.

Log on to PRIV-PAM as PRIV\PAMAdmin (use password P@$$w0rd)

Run PowerShell as Admin and execute the following:

$cert = Import-Certificate -CertStoreLocation Cert:\LocalMachine\my -FilePath C:\Setup\pamapi.contoso.com.cer
$guid = [guid]::NewGuid().ToString("B")
$tPrint = $cert.Thumbprint
netsh http add sslcert hostnameport="pamapi.contoso.com:8086" certhash=$tPrint certstorename=MY appid="$guid"

    b. Configure SSL on pamapi.contoso.com

Run PowerShell as Admin and execute the following:

Set-WebBinding -Name 'MIM Privileged Access Management API' -BindingInformation ":8086:" -PropertyName Port -Value 8087
New-WebBinding -Name "MIM Privileged Access Management API" -Port 8086 -Protocol https -HostHeader "pamapi.contoso.com" -SslFlags 1
Remove-WebBinding -Name "MIM Privileged Access Management API" -BindingInformation ":8087:"

        Conclusion of Part 3

Now we are ready for Part 4 – installing the PAM Example portal.

In this exercise we went step by step through the PAM Portal setup. If you carefully followed all the steps, you now have a healthy and well-configured PAM deployment.

We didn't spend time on portal customization and branding, which I leave to you for the future.

In Part 4 we will set up the PAM Example Portal.

        Until then

        Have a great week

        Disclaimer – All scripts and reports are provided ‘AS IS’

        This sample script is not supported under any Microsoft standard support program or service. This sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of this sample script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of this script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this sample script or documentation, even if Microsoft has been advised of the possibility of such damages.

        Most Common Mistakes in Active Directory and Domain Services – Part 3


This blog post is the third (and last) part in the ‘Most Common Mistakes in Active Directory and Domain Services’ series.
In the previous parts, we covered some major mistakes like configuring multiple password policies using GPO and keeping FFL/DFL at a lower version.
The third part of the series is no exception. We’ll go on to review three additional mistakes and then summarize the series.

        Series:

        Mistake #7: Installing Additional Server Roles and Applications on a Domain Controller

        When I review a customer’s Active Directory environment, I often find additional Windows Server roles (other than the default ADDS and DNS roles) installed on one or more of the Domain Controllers.

This can be any role – from RDS Licensing, through Certificate Authority, and up to DHCP Server. Besides Windows Server roles, I also find special applications and features running on the Domain Controllers, like a KMS (Key Management Service) host for volume activation, or Azure AD Connect for integrating on-premises directories with Azure AD.

        There is a wide variety of roles and applications which administrators install on the Domain Controllers, but there is one thing common to all of them: Domain Controllers are NOT the place for them.

By default, any Domain Controller in a domain provides the same functionality and features as the others, which means that Active Directory Domain Services is not affected if one Domain Controller becomes unavailable.
        Even in a case where the Domain Controller holding the FSMO roles becomes unavailable, the Domain Services will continue to work as expected for most scenarios (at least in the short-term).

        When you install additional roles and applications on your Domain Controllers, two problems are raised:

1. Domain Controllers with additional roles and features become unique and different compared to other Domain Controllers. If any of these Domain Controllers is turned off or gets damaged, its extra roles and features might be affected and become unavailable. This, in fact, creates a dependency between ADDS and the other roles and affects the redundancy of the Active Directory Domain Services.
2. Upgrading your Active Directory environment becomes a much more complicated task. A DHCP Server or Certificate Authority role installed on your Domain Controllers forces you to deal with it first, and only then move forward and upgrade Active Directory itself. This complexity might also affect other tasks, like restoring a Domain Controller or even putting a Domain Controller into maintenance.

        This is why putting additional roles and applications on your Domain Controllers is not recommended for most cases.
You can use the following PowerShell script to easily get a report of the roles installed on your Domain Controllers. Note that this approach works only for Windows Server 2012 and above; for Windows Server 2008, you can use a WMI query instead.
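A minimal sketch of such a report (not the original script from the post; it assumes the RSAT Active Directory module and PowerShell remoting access to the Domain Controllers):

 Import-Module ActiveDirectory
 $dcs = (Get-ADDomainController -Filter *).HostName
 Invoke-Command -ComputerName $dcs -ScriptBlock {
     # Report every installed role on this Domain Controller
     Get-WindowsFeature | Where-Object { $_.Installed -and $_.FeatureType -eq 'Role' } |
         Select-Object @{ n = 'DomainController'; e = { $env:COMPUTERNAME } }, Name, DisplayName
 } | Sort-Object DomainController | Format-Table -AutoSize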

         

Bottom Line: Domain Controllers are designed to provide directory services for your users – allowing access to domain resources and responding to security authentication requests.
Mixing Active Directory Domain Services with other roles and applications creates a dependency between the two, affects Domain Controller performance, and makes administrative tasks much more complicated.

        Do It Right: Use Domain Controllers for Active Directory Domain Services only, and install additional roles (let it be KMS or a DHCP server) on different servers.

        Mistake #8: Deploying Domain Controllers as a Windows Server With Desktop Experience 

        When you install Windows Server, you can choose between two installation options:

        • Windows Server with Desktop Experience – This is the standard user interface, including desktop, start menu, etc.
• Windows Server – This is Server Core, which leaves out the standard user interface in favor of the command line.

Although Windows Server Core has some major advantages compared to Desktop Experience, most administrators still choose to go with the full user interface, even for the most convenient and supported server roles like Active Directory Domain Services, Active Directory Certificate Services, and DHCP Server.

Windows Server Core is not a new option – it has been around since Windows Server 2008 R2. It works great for the supported Windows roles and has some great advantages compared to Windows Server with Desktop Experience. Here are the most significant ones:

• Reduce the potential attack surface and lower the chance for user mistakes – Windows Server Core reduces the potential attack surface by eliminating binaries and features that are not required for the supported roles (Active Directory Domain Services in our case).
  For example, the Explorer shell is not installed, which of course reduces the risks and exploits that can be manipulated and used to attack the server.
  Other than that, when customers use Windows Server with Desktop Experience for Active Directory Domain Services, they also usually perform administrative tasks directly on their Domain Controllers using Remote Desktop.
  This is a very bad habit, as it may have a significant impact on Domain Controller performance and functionality. It might also cause a Domain Controller to become unavailable by accidentally turning it off or by running a heavy PowerShell script which drains the server’s memory.
• Improve administrative skills while still being able to use the GUI tools – by choosing Windows Server Core, you’ll probably get the chance to use some PowerShell cmdlets and improve your PowerShell and scripting skills.
  Some customers think that this is the only way to manage and administer the server and its roles, but that’s not true.
  Alongside the command-line options, you’ll find some useful remote management tools, including Windows Admin Center, Server Manager, and the Remote Server Administration Tools (RSAT).
  In our case, RSAT includes all the Active Directory administrative tools like Active Directory Users and Computers (dsa.msc) and the ADSI Editor (adsiedit.msc).
  It is also important to be familiar with the ‘Server Core App Compatibility Feature on Demand’ (FOD), which can be used to increase Windows Server Core 2019 compatibility with other applications and to provide administrative tools for troubleshooting scenarios.
  My recommendation is to deploy an administrative server for managing all domain services roles, including Active Directory Domain Services, DNS, DHCP, Active Directory Certificate Services, Volume Activation, and others.
• Other advantages, like reduced disk space and memory usage, are also here, but they, by themselves, are not the reason for using Windows Server Core.

You should be aware that unlike Windows Server 2012 R2, you cannot convert Windows Server 2016/2019 between Server Core and Server with Desktop Experience after installation.

        Bottom Line: Windows Server Core is not a compromise. For the supported Windows Server roles, it is the official recommendation by Microsoft. Using Windows Server with Full Desktop Experience increases the chances that your Domain Controllers will get messy and will be used for administration tasks rather than providing domain services.

Do It Right: Install your Domain Controllers as Windows Server Core, and use remote management tools to administer your domain resources and configuration. Consider deploying one of your Domain Controllers as a Windows Server with Full Desktop Experience for forest recovery scenarios.

Mistake #9: Using Subnets Without Mapping Them to Active Directory Sites

Active Directory uses sites for many purposes. One of them is to inform clients about the Domain Controllers available within the site closest to the client.

        For doing that, each site is associated with the relevant subnets, which correspond to the range of IP addresses in the site. You can use Active Directory Sites and Services to manage and associate your subnets.

When a Windows domain client is looking for the nearest Domain Controller (what’s known as the DC Locator process), Active Directory (or more precisely, the NetLogon service on one of the Domain Controllers) looks up the IP address of the client in its subnets-to-sites association data.
If the client’s IP address is found in one of the subnets, the Domain Controller returns the relevant site information to the client, and the client uses this information to contact a Domain Controller within its site.

When the client’s IP address cannot be found, the client may connect to any Domain Controller, including ones that are physically far away from it.
This can result in communication over slow WAN links, which has a direct impact on the client login process.

If you suspect that you have missing subnets in your Active Directory environment, you can look for event ID 5807 (Source: NETLOGON) on your Domain Controllers.
The event is created when there are connections from clients whose IP addresses don’t map to any of the existing AD sites.
Those clients, along with their names and IP addresses, are listed by default in C:\Windows\debug\netlogon.log.

        You can use the following PowerShell script to create a report of all clients which are not mapped to any AD sites, based on the Netlogon.log files from all of the Domain Controllers within the domain.
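A minimal sketch of such a report (not the original script from the post) – it reads the netlogon.log on every DC over the admin$ share and extracts the NO_CLIENT_SITE entries:

 Import-Module ActiveDirectory
 $report = foreach ($dc in (Get-ADDomainController -Filter *).HostName) {
     $log = "\\$dc" + '\admin$\debug\netlogon.log'
     if (Test-Path $log) {
         Select-String -Path $log -Pattern 'NO_CLIENT_SITE:' | ForEach-Object {
             # Each matching line ends with the client name followed by its IP address
             $parts = $_.Line -split '\s+'
             [pscustomobject]@{
                 DomainController = $dc
                 ClientName       = $parts[-2]
                 ClientIP         = $parts[-1]
             }
         }
     }
 }
 $report | Sort-Object ClientIP -Unique | Format-Table -AutoSize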

        The script output should look similar to this:

Bottom Line: The association of subnets to Active Directory sites has a significant impact on client machine performance. Missing this association may lead to poor performance and unexpectedly long logon times.

Do It Right: Work together with your IT network team to make sure any new scope is covered and has a corresponding subnet associated with an Active Directory site.

        So… this was the last part of the ‘Most Common Mistakes in Active Directory and Domain Services’ series.
        Hope you enjoyed reading these blog posts and learned a thing or two.

        Time zone issues when copying SCOM alerts


        Background

When trying to copy-paste (Ctrl+C, Ctrl+V) alerts from the SCOM console to an Excel worksheet or just a text file, we noticed that the Created field values were different from the ones displayed in the console. There was a two-hour difference.


        As it turns out, the server was configured in a GMT+2 time zone, and the values got pasted in UTC. Hence the two-hour difference.

        Solution

        On each of the servers/workstations with SCOM console installed where you want to fix this, simply create the following registry key and value:

Key: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Console\ViewCopySettings

        Value: InLocalTime (DWord)

        Data: 1

        (Where 1 means that you want to have the values in your local time, and 0 means the default behaviour of UTC)
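If you prefer to script it, the following is a minimal sketch that creates the key and value for the current user (assuming you want the local-time behaviour):

 $key = 'HKCU:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Console\ViewCopySettings'
 # Create the key if it does not exist, then set InLocalTime to 1 (local time)
 New-Item -Path $key -Force | Out-Null
 New-ItemProperty -Path $key -Name 'InLocalTime' -PropertyType DWord -Value 1 -Force | Out-Null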


         

        Conclusion

With some digging done by me and my colleagues using Procmon, we were able to find out that the copy mechanism tries to read a registry key and value that do not exist.

        So.. “When in doubt, run process monitor” – Mark Russinovich.

         

        Hope this helps,

        Oren Salzberg.
