AD Consultants

  • Last Post 22 May 2014
SmitaCarneiro posted this 13 May 2014

Would anyone be able to recommend a good consulting group/consultant for AD?

We plan to start working on a new forest to eventually replace our current one (Server 2008 R2 DCs, but 2003 domain and forest functional level).

Thanks,

Smita Carneiro, GCWN

rwf4 posted this 22 May 2014

We are doing the same thing, using /32s to assign test machines to sites in a controlled fashion.


One other element we’re employing is blocking the test site DCs from Exchange and then introducing them in a controlled fashion.  


To do the exclusion:

Get-ExchangeServer | Set-ExchangeServer -StaticExcludedDomainControllers dc1fqdn,dc2fqdn


To check status:

Get-ExchangeServer -Status | ft Name,StaticExcludedDomainControllers,StaticDomainControllers,StaticGlobalCatalogs


To remove the exclusion:

Get-ExchangeServer | Set-ExchangeServer -StaticExcludedDomainControllers:$NULL



Once things have stabilized and the DCs are in the proper sites, we’ll do the same to remove coverage for the legacy DCs in the Exchange sites and finish weaning the last remaining apps off the legacy DCs.


This has been a great thread. Thanks to all!



MittlemanR posted this 16 May 2014

One tiny suggestion to add –


For our domain upgrade, I created a “PREVIEW-SITE” and put the first of the new DCs there.  That way, while we’re vetting all our servers and apps, nobody fails. 


I go into AD Sites and Services and hard-code temporary subnet entries for selected test servers (add an IPv4 subnet nn.nn.nn.nn/32, include the server name in the description, and assign the subnet to PREVIEW-SITE).


For example, first I pointed one of each flavor of UNIX and Linux at the new DCs.  Passed.


Then I pointed selected test and development servers – IIS, SQL, SharePoint, BizTalk – at the new DCs.
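For anyone who wants to script the temporary /32 trick above, a sketch using the ActiveDirectory module's replication-subnet cmdlets (the address and server name here are hypothetical; assumes RSAT and at least one 2012-level DC to run against):

```powershell
# Temporarily pin a test server to the preview site via a /32 subnet entry.
Import-Module ActiveDirectory

New-ADReplicationSubnet -Name "10.20.30.40/32" `
    -Site "PREVIEW-SITE" `
    -Description "testserver01 - temporary, remove after vetting"

# Verify which site the subnet landed in.
Get-ADReplicationSubnet -Filter 'Name -eq "10.20.30.40/32"' |
    Format-Table Name, Site, Description

# When vetting is done, remove the pin so the server falls back to
# normal subnet-to-site mapping.
Remove-ADReplicationSubnet -Identity "10.20.30.40/32" -Confirm:$false
```

The server picks up the new site on its next DC-locator refresh, so allow for that (or restart Netlogon on the test box) before checking which DC it authenticates against.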




mi6agent44 posted this 15 May 2014

I like this…I will run this through the lab and possibly implement.


Round robin can be touchy though, especially in Citrix land.


Good stuff! Thanks!



Rajeev Chauhan posted this 14 May 2014

Not all enterprises are the same, even though AD is. In your case, being .edu, privacy laws will carry more weight. So whatever you do, plan your schema well and keep it simple. Most of the appliances and technologies listed here are helpful.



kbatlive posted this 14 May 2014

>> If the main data center goes dark, change DNS for the to the failover site ldap2 IP and production continues. Future plans are for a dedicated GEO site failover appliance that will perform the redirect automagically. (technical term :) ) <<

We are already doing this using our load balancer (DNS appliances). It is site-aware and knows the network topology (OK, someone entered that topology). We have 3 basic “sites” from the LB’s point of view: datacenter1, datacenter2, and everywhere else (hundreds of locations).

We delegated a DNS zone to the two appliances, and name resolution for that zone is then sent to the LB’s, who use their rules to return the address of a DC. Note: this doesn’t work for LDAPS (not yet… I need to create a SAN cert to allow LDAPS to the DC’s behind that DNS name).

The general LB targeting rules are: if the caller is in datacenter X, return an address in datacenter X IF the target system (i.e. a domain controller) is up in datacenter X; otherwise, return a target in datacenter Y (it can check different ports before it returns the IP address, to determine if a system is “up”). So calls from systems in datacenter #1 return domain controllers in datacenter #1 – IF the domain controllers are “up” (port 389 responding); datacenter #2 does the same for DC’s in datacenter #2. If the system is remote (from either datacenter), it basically round-robins between DC’s in the two datacenters (also checking if they are “up”).

I have 3 DC’s in each datacenter per domain that become the “targets” for returning a domain controller address for each domain (and we have two domains, so 12 DC’s in total). It also has a “sticky” function, so that if a system is directed to a specific DC it will ‘stay’ with that DC (if that DC is up) – apparently this was an issue for some stateful applications (I forget the specifics as to “why” – having the sticky helped those apps, so they were doing more than just pure 389 LDAP queries).
We also use DNS round-robin for an LDAP entry – of course, that is pure DNS round-robin, so if a DC is down you’ll get some failures. That was implemented (and published) before we had the load balancer, and there hasn’t been a push to get people to change their applications to use the load balancer (OK, we’ve been lucky… or maybe good :) ).
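The targeting rule described above can be sketched in a few lines of PowerShell. This is only an illustration of the logic, not the appliance's actual configuration, and the site names are hypothetical:

```powershell
# Sketch of the LB's DC-selection rule: prefer an "up" DC in the caller's
# datacenter; otherwise fall back to any "up" DC; return $null if all are down.
function Select-DomainController {
    param(
        [string]$CallerSite,   # e.g. 'datacenter1', 'datacenter2', or 'remote'
        [object[]]$DCs         # each: @{ Fqdn = ...; Site = ...; Up = $true/$false }
    )
    $up    = $DCs | Where-Object { $_.Up }                      # "up" = port 389 responding
    $local = $up  | Where-Object { $_.Site -eq $CallerSite }
    if ($local) { return ($local | Get-Random).Fqdn }           # same-datacenter preference
    if ($up)    { return ($up    | Get-Random).Fqdn }           # failover / remote round-robin
    return $null                                                # everything is down
}
```

Get-Random stands in for round-robin here; a real LB would also layer the "sticky" session affinity on top of this selection.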


daemonr00t posted this 14 May 2014

Keep in mind that you'll get one year of access to the AD RaaS tool, so you can use it during and after the upgrade process :)


Sent from my Windows Phone via Exchange Online


Cynthia posted this 14 May 2014


You sound like a natural architect, and those are hard to come by.

My unsought-after 2 cents.


Good Luck.


Cynthia Erno

Server Applications & Fileshare Administrator

Department of Corrections & Community Supervision (ITS)

(518) 408-5506




mi6agent44 posted this 13 May 2014

OK, although I feel like I’m bragging:


60k users, 28k devices, and a 300-plus client-server application environment. SLA is about one nanosecond.


We have 40 tier-zero applications for healthcare that were written by a wide variety of vendors. To suit this mixed bag of cats, I began a 6-month campaign of sorts to move all these apps to the VIP. Using an F5 LB, we opened both LDAP and LDAPS ports and gently asked the admins of the respective applications to move to the LDAP VIP. Once the top-tier applications were moved, we established this as a published standard for anything net new. To meet DR requirements, I then created a second VIP at another data center with a small subset of DC’s within the same site and domain. Applications within that data center pointed to ldap2 to avoid the “don’t span the WAN” gotcha. (Old Novell guy… sue me!)


The secondary site was made the same way, for example.


In some cases there were some big applications that required three DC targets – dc1, dc2, dc3 – and their rollover was linear.

My thought was primary:, secondary: and tertiary. Bulletproof, anyone?


If the main data center goes dark, change DNS to point to the failover site’s ldap2 IP, and production continues.

Future plans are for a dedicated GEO-site failover appliance that will perform the redirect automagically. (technical term :) )



Netscaler can do this, as can others. I also suggest enabling “health checks” at the LB for a seamless “where’s my good DC?” experience, to prevent session hangs and a Service Desk phone meltdown.
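A rough way to approximate the same health check from any client, assuming Windows 8/Server 2012 or later for Test-NetConnection (the DC names here are hypothetical):

```powershell
# Poor man's LB health check: is each DC answering on LDAP (389) and LDAPS (636)?
$dcs = 'dc1.corp.example.com', 'dc2.corp.example.com'

foreach ($dc in $dcs) {
    foreach ($port in 389, 636) {
        # -InformationLevel Quiet returns a plain $true/$false for the TCP probe.
        $ok = Test-NetConnection -ComputerName $dc -Port $port -InformationLevel Quiet
        "{0}:{1} -> {2}" -f $dc, $port, $(if ($ok) { 'up' } else { 'DOWN' })
    }
}
```

A TCP connect only proves the port is listening, not that LDAP is answering queries; a real LB monitor would ideally do an actual LDAP bind or search.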


We use BlueCat for DNS, IPAM and DHCP. A semi-painful conversion, but well worth the reliability and ease of management. Trust me.

All public-facing DNS and DHCP appliances in this system can run headless for a time during a major outage, and are much more robust and secure than a Windows box.


Carpal tunnel! I’m out! Impress your directors! Get more sleep! All this can be yours!





SamErde posted this 13 May 2014

Being 30% or so into a forest consolidation project that is moving into a new forest… I have to say that David's high-level plan sounds sublime. 
Using a VIP to load balance or redirect LDAP requests is a new one for me. Is that something that can be done on a Netscaler Access Gateway? Can this kind of thing also be done for DNS? 


daemonr00t posted this 13 May 2014

Well… most of the large IT companies do the delivery offshore… and so far nothing has burned out :) (well… some do).





mi6agent44 posted this 13 May 2014

Sadly all MS output is limited by disclosure agreements. (Kerberos)  



mi6agent44 posted this 13 May 2014

Another potential snare.


KB2774190. This KB relates to Resource SID Compression in Windows Server 2012, and specifically to issues involving user authentication to NAS devices.

It might break access to NAS shares if you are an EMC customer. This setting is on by default in Server 2012.
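If memory serves, the workaround in that KB is a registry value on the 2012 DCs that disables resource SID compression in Kerberos tickets. Treat the value name below as an assumption and verify it against KB2774190 before applying anything:

```powershell
# Disable resource SID compression on a Server 2012 DC (workaround as I
# recall it from KB2774190 - verify against the KB before using).
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\Kdc\Parameters'

if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null
}
Set-ItemProperty -Path $key -Name 'DisableResourceGroupsFields' -Value 1 -Type DWord

# The KDC has to pick up the change before it affects new tickets.
Restart-Service kdc
```

Note this trades a NAS-compatibility fix for slightly larger Kerberos tickets, so only set it where the NAS authentication problem actually exists.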





danj posted this 13 May 2014

Perhaps you can share this information. Dan  


mi6agent44 posted this 13 May 2014

That wasn’t Microsoft’s take when we did a full, paid-for pre-assessment.


There are TechNet articles that support this.


In this case we don’t know if the environment has NT 4.0 or lower-level patched 2000 boxes.


I would be wary here, NetBIOS adventurer.





danj posted this 13 May 2014

Forest functional level is unlikely to have any bearing on most of your applications. Dan  


kevinrjames posted this 13 May 2014

IMO that’s not a reason to abandon it. If you’re concerned about Office 365 or other cloud solutions, then there are plenty of solutions to that problem.


If your AD schema is badly broken or you’ve had an AD ‘expert’ recommend dumping the forest, that’s another thing.





mi6agent44 posted this 13 May 2014

@Carneiro, Smita A


I wholeheartedly agree. No swing upgrade. (see nightmare)


Try this:


Have MS do a full RAP on your environment. (safety’s sake)

Clean, prune and document your current forest/domain and infrastructure per MS recommendations.

Patch all DC’s and DNS servers up to current level.

Upgrade the forest functional level to 2008.x.

Swap-replace the current domain controllers with 2012 R2 servers, hardware and budgets permitting.

Create a VIP with a pool of domain controllers for legacy applications that require static targets.

Install certs on the LB to support LDAPS for applications that may require it.

Install Netwrix AD Auditor so you know what is going on in your environment and to meet the impending compliance requirements that we are all being tasked with. (Provide drive space for 7-year retention in SQL.)

Infosec will thank you.

Leave the forest functional level at 2008 for backward compatibility, as there seem to be too many application unknowns.

Have MS do a full RAP on your environment post-deployment to proof it and correct missed issues.

Take a nice vacation after the accolades received.
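For the LDAPS step in the plan above, one way to verify that the cert on the LB actually works end-to-end is a scripted LDAPS bind through the VIP. The VIP name below is hypothetical; the S.DS.Protocols classes ship with Windows:

```powershell
# Hedged sketch: bind over LDAPS (636) through the VIP. Bind() throws if the
# certificate chain, the name on the cert, or the port is broken.
Add-Type -AssemblyName System.DirectoryServices.Protocols

$id   = New-Object System.DirectoryServices.Protocols.LdapDirectoryIdentifier('ldap.corp.example.com', 636)
$conn = New-Object System.DirectoryServices.Protocols.LdapConnection($id)
$conn.SessionOptions.SecureSocketLayer = $true
$conn.AuthType = [System.DirectoryServices.Protocols.AuthType]::Negotiate

try {
    $conn.Bind()
    'LDAPS bind OK'
} catch {
    "LDAPS bind failed: $($_.Exception.Message)"
} finally {
    $conn.Dispose()
}
```

Because the clients hit the VIP name rather than individual DCs, the cert (on the LB or on the DCs behind it) needs that VIP name as a SAN, which is the same gotcha kbatlive mentioned earlier in the thread.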


Oh…and for this fine and fancy mini-project plan?


I need a place to couch surf for my 50 state bucket list…Indiana?  Never been. Seems nice.






SmitaCarneiro posted this 13 May 2014

It’s not just applications that I’m concerned about. We’re also trying to take the opportunity to have a fresh start, though from what many of you are saying, it will be a Herculean task.

And our domain is a .lcl one, so making it a publicly routable domain is another issue that we’re trying to take care of.








a-ko posted this 13 May 2014

This depends…
Review the AD to look for inconsistencies in the environment. If replication is broken, DCs and trusts were created that no longer exist, and you see lots of weird event log errors, sometimes it’s better to start fresh.
A big reason to start fresh is if you're looking to migrate from a “.lcl” or “.local” domain infrastructure to something more recommended, combined with the above issues.
Especially if you're combining this AD effort with network changes, directory object cleanup, DNS cleanup, etc., sometimes it’s just nicer to start fresh. But it really depends on the environment…
We started fresh because we needed to implement security controls from the bottom up (DISA STIG) and the previous applications break when those things are implemented. We wanted to identify at very early stages what would work and what wouldn't work. Having the parallel domain effort meant we could slowly migrate individual users and the organization wasn't affected greatly if some of that user’s applications broke.
My 2 cents…
Sent from Windows Mail


kevinrjames posted this 13 May 2014

You’re probably more likely to break applications by migrating rather than upgrading. Build new Domain Controllers and introduce them gradually, decommission the old ones as you go.


Applications generally don’t care what version of DC OS they target, but there are some considerations. It’s significantly safer to gradually work through them than cut them over to an entirely different domain/forest.


Don’t abandon your existing AD so easily.




