Design AD for branch locations with poor bandwidth

  • Last Post 07 July 2016
MatCollins posted this 30 June 2016


I have been working as an AD administrator at a company for the past couple of years, and they have recently decided to expand their AD infrastructure to branch locations around the country.

Some of these branch locations suffer from very low bandwidth. Here is the list:

  • Some branches with 512 Kbps (will be updated to 1 Mbps next year)
  • Some branches with 256 Kbps
  • Some old-school branches with 64 Kbps (yes, these still exist, sadly!)

Since I have been away from designing for branches with very low bandwidth for a while, I was wondering if you know of any resources (books, documents) that cover design approaches and best practices for this kind of branch office.


I am also happy to provide more information if you have any insights into this situation.




dvasilescu posted this 30 June 2016

You can have a look at the table in the article; it might be a good start.


MatCollins posted this 02 July 2016

Thank you.

How can we calculate the required bandwidth between clients and a DC? I am struggling to work out what bandwidth a client needs to a DC to allow logon, Group Policy processing, and so on.

Any ideas?
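As a starting point, here is a rough back-of-envelope sketch. The per-logon traffic figure and the logon window below are illustrative assumptions, not measured values; capture real logon traffic in your own environment before trusting any numbers.

```python
# Back-of-envelope estimate of link utilisation during a morning logon storm.
# kb_per_logon (authentication + GPO download per client) is an assumed
# figure for illustration only; measure your own clients to calibrate it.

def logon_bandwidth_kbps(clients, kb_per_logon=500, window_minutes=15):
    """Average bandwidth needed if `clients` machines each pull
    `kb_per_logon` KB of logon/GPO traffic within `window_minutes`."""
    total_kbits = clients * kb_per_logon * 8       # KB -> kilobits
    return total_kbits / (window_minutes * 60)     # kbps averaged over the window

for clients in (4, 30, 90):
    need = logon_bandwidth_kbps(clients)
    print(f"{clients:3d} clients -> ~{need:.0f} kbps average over a 15-min window")
```

Under these assumptions a 4-client branch fits comfortably inside even a 64 Kbps link, while a 90-client branch would saturate a 256 Kbps link during the logon window alone.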

ZJORZ posted this 02 July 2016

Do you want to allow logon over the slow link to the "central DCs", or are you thinking of hosting a (virtual) DC in the branch, connected through the slow link?

Any other infrastructure at those branches?

What are the average, lowest, and highest numbers of users at each branch?

Met vriendelijke groet / Kind regards,

Jorge de Almeida Pinto



Tel.: +31-(0)6-

(+++Sent from my mobile device +++)

(Apologies for any typos)


gkirkpatrick posted this 02 July 2016

And if there are servers running at the branches, are there any AD-integrated applications running on them (or on the clients, for that matter)?





MatCollins posted this 03 July 2016

Thanks for the reply everyone.

Regarding connectivity between those clients and the DCs, let me explain a little.

We have divided the country into 30 regions with 30 corresponding sites, and for each site we have two domain controllers. Clients in these regions currently authenticate to their corresponding domain controllers, which works well at the moment. The problem is the branch offices within each region, which typically have poor bandwidth to their region's domain controllers.

Jorge, regarding the number of clients in each branch office, it varies a lot. Some branch offices have 3-4 clients, while others have up to 90. So at first I thought it would be good to put a DC in the larger branch offices. What do you think about this? And what do you suggest for the branch offices with extremely poor bandwidth (64 Kbps)?

Patrick, we do not have AD-integrated applications in our branch offices. How does that affect the planning?
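To make the "DC in the larger branches" idea concrete, here is a minimal triage sketch. The client-count and link-speed thresholds are assumptions chosen for illustration, not Microsoft guidance; adjust them to your own logon measurements and risk appetite.

```python
# Rough per-branch DC-placement triage. The 50-client and 64 kbps
# thresholds are illustrative assumptions only.

def dc_recommendation(clients, link_kbps):
    """Return a placement suggestion for one branch."""
    if clients >= 50:
        return "local DC (with failover to the regional DCs)"
    if link_kbps <= 64:
        return "local RODC or WAN upgrade: link too slow for remote logon"
    return "no local DC: authenticate against the regional DCs"

branches = [(4, 256), (30, 512), (90, 512), (10, 64)]
for clients, kbps in branches:
    print(f"{clients:3d} clients @ {kbps:4d} kbps -> {dc_recommendation(clients, kbps)}")
```

The point of a sketch like this is to force the placement rule to be explicit and reviewable, rather than decided branch by branch on gut feel.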

MatCollins posted this 05 July 2016

Any more ideas on this?


jeremyts posted this 05 July 2016

<Rant on


For me there is much more to this than just DC placement.


For example:

What other services need to be delivered?

Are these fat clients?

Windows Updates?

Software Deployment?

File and Print?

Do the offices work collaboratively?

Do you have Skype for Business, or something like that?


What about WAN optimisation technologies that may help reduce the number of DCs deployed while still allowing services to be consumed well? Something like Cisco Meraki or Riverbed, or of course NetScaler if you're a Citrix environment. Then again, you wouldn't really have this issue with Citrix, as most infrastructure would more than likely be centralised.


I agree that you're going to want to place a DC in the larger sites. I would just go with one and let your main data centre be your failover. Once again, I make this statement without understanding your environment.


The nice thing about the Riverbeds is that certain models will host a VMware environment. Not huge, but enough for a DC, File and Print, etc. This means you can place the Riverbed appliance in the switch rack on-site and don't need to worry about server hardware and racks for those sites. These can also be Server Core deployments, so you streamline the builds and patching where possible. Some may frown at a design like that, but it works!


Sure, you can get assistance and ideas from these forums, there are plenty of awesome people listening, and I have no doubt many have been in your situation before. But here you're talking about architecture. If you don't have anyone on board to help you with that, I would strongly recommend you engage a specialist and do it properly: someone with broad skills, not necessarily an Active Directory specialist.


I understand there may be budgetary constraints. But before you consider creating DC sprawl, think about the cost of managing it, and what problems it is really going to solve.


>Rant off


Sorry, I needed that rant. I see this all too often and it winds me up.







ken posted this 05 July 2016



The OP seems to be approaching the whole design from a limited technical viewpoint, i.e. how many users justify a DC. But what do those users need a DC for? Authentication? Access to resources? And what is the implication if a local DC isn't accessible or available? Is someone going to die? Are they going to be in breach of some regulatory framework? Or does it just mean that users will take an additional minute to log on?


On the other side, what's the operational implication of a huge number of DCs? What's the per-server operational cost (hardware, licensing, monitoring, patching)? What's your additional security-risk exposure from having far-flung DCs in unsecured locations (compared to good data centres)? What happens when you do your next AD restoration: do you have out-of-band access to each DC, or do you need dozens of techs onsite to do it?


Whilst technology presents problems to be overcome, IMHO the main drivers are business requirements, and one of the main components of that is cost, which seems to be ignored here. On a pure cost basis, upgrading the WAN links (or putting in WAN optimisation devices) might prove cheaper than deploying more servers.



g4ugm posted this 05 July 2016

In my humble opinion, the critical part of any branch office deployment is not really DC placement, although that is of course important. It is more to do with the non-DC parts of account management, such as profiles, drive mappings, and file storage. Dave


ken posted this 05 July 2016



And even then, depending on the industry you're in, reliable WAN links are possibly more important. At least in retail banking, my experience is that access to core banking applications is far more important than file shares and profiles. Other verticals are probably different. YMMV.



g4ugm posted this 05 July 2016

Actually, the first rule is that if you can't log on, you can't work. So if you need to sync the desktop to log on, then you can't access core banking functions until it's synced, and profile syncing becomes critical. On occasion I have resorted to unplugging the network cable on a PC to force a logon with cached credentials, just to get logged on to a PC over a VPN.

Of course, you do need remote links to be reliable. I once had a change request for an MS Exchange update bounced because I was going to do it remotely and hadn't mentioned the possibility of network failure. I pointed out to the change manager that, as the remote site had multiple levels of redundancy on its WAN links, installed with a virtually unlimited budget, if the network failed then the last thing they would be worrying about was my Exchange update, and if it did fail I would be hiding under the table while the poo flew overhead.

I guess that also illustrates that business need is important. There is no point spending on something you don't need. On the other hand, if you need bandwidth to work and haven't bought enough, then no amount of bodging will fix it. I wonder if, for the small offices, a remote desktop type solution might work better, BUT even that needs some bandwidth.

Dave G4UGM


ken posted this 06 July 2016

If a local DC is not available, you can still log on: against a remote DC, or with cached credentials.


Most banking teller-type staff don't have roaming profiles etc.; they have a very limited set of applications that allow them to process transactions, plus product applications.



robertsingers posted this 06 July 2016

Or they use something like Citrix.


patrickg posted this 07 July 2016

It really depends on whether network connectivity is required at every location. If the branches need to function with the network down for x hours, one set of issues arises; if they cannot be down at all, another set of issues and possible solutions arises.


When looking at this, also factor in supporting the environment. If network access is generally present, or is a hard requirement for a number of the more critical business applications, then I would honestly take a look at doing VDI with PCoIP over VPN/MPLS for most sites, extending it to the others as time and bandwidth become available.


You gain several benefits:


Latency to DCs is never an issue…everything is kept in the datacenter.


All applications function at LAN speeds; you only need a larger pipe wherever the VDI servers are.


Updating is centralized for all locations and quick for end users.


The data stays in the datacenter; if there is physical theft at a location, there is never the question of what went missing.


Support – no servers at the remote sites (outside of geolocation DR needs)


Support – Desktop team: keep a spare thin client on the shelf and a UPS/FedEx/USPS box next to it. If a client is faulty, anyone can unplug and replug four cables over the phone and drop-ship the faulty unit back to corporate. I've seen a few orgs save on FTE costs here (or pick up projects they didn't have time for), with no need to send techs out to each site every time there is a hiccup.


Simplicity for users: each site is the same and all applications run at LAN speeds. If someone sends a large email, the smaller sites don't grind to a halt.




Upfront complexity: if you're not familiar with it, there is a learning curve.


Single-channel ISDN is going to be a problem; in fact it will cause issues for almost every solution…some may "work", but do they really work?
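To put numbers on why 64 Kbps struggles with almost any solution, here is a quick transfer-time sketch. The payload sizes and the 80% link-efficiency figure are illustrative assumptions, not measurements.

```python
# Why 64 kbps is hard to work around: even modest payloads take minutes.
# Payload sizes and the efficiency factor are illustrative assumptions.

def transfer_minutes(size_mb, link_kbps, efficiency=0.8):
    """Minutes to move `size_mb` MB over a `link_kbps` link, assuming the
    link achieves only `efficiency` of its nominal rate (protocol overhead)."""
    bits = size_mb * 8 * 1000 * 1000              # decimal MB -> bits
    return bits / (link_kbps * 1000 * efficiency) / 60

payloads = [("logon + GPO traffic", 0.5),
            ("small software push", 20),
            ("monthly update bundle", 100)]
for name, mb in payloads:
    print(f"{name:22s} {mb:6.1f} MB -> {transfer_minutes(mb, 64):6.1f} min on 64 kbps")
```

Under these assumptions a 20 MB software push alone occupies the entire 64 Kbps link for the better part of an hour, which is why WAN optimisation, VDI, or a link upgrade tends to beat adding local servers at those sites.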


Capex can be high, but the Opex efficiencies gained and the increases in security often more than offset it.