Is there a way to speed up BITS

Firstly, I'd like to say that this is a great forum with hundreds of useful posts. Up to now I've just been a spectator, as most of my questions have already been answered.


My question is regarding BITS. I have a small SCCM installation with a single primary site, a separate SQL server, and two secondary site servers with distribution points. I have noticed that when I add new packages to the infrastructure, it seems to take quite a long time before they are copied to the DPs. I checked on the primary site server and I can see a temp folder with files that increase in size until they reach the size of the package in question. I presume this is SCCM creating the BITS package before it copies it over to my DPs.


My issue is that I added a 15 GB Windows 7 wim to my infrastructure the other day and it took about 10 hours to create and copy the package to my DP. The server does not appear stressed at all, whether CPU, RAM, disk I/O or network, so I presume that BITS has a limit of some kind. I also noticed that I could not update any smaller, existing packages on my DP as they seemed to queue behind the large wim file, which can be quite inconvenient if you need to get something out urgently.
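For scale, the figures above imply a throughput far below what the hardware should manage. A quick back-of-envelope check (the numbers are taken straight from this post; nothing here is measured from ConfigMgr itself):

```python
# Back-of-envelope check using the figures from the post:
# a 15 GB package took about 10 hours to process.
package_gb = 15
hours = 10

gb_per_hour = package_gb / hours                      # 1.5 GB/h
mb_per_second = package_gb * 1024 / (hours * 3600)    # ~0.43 MB/s

print(f"{gb_per_hour:.1f} GB/h is roughly {mb_per_second:.2f} MB/s")
```

That is well under what even a single disk or a 100 Mbit link can sustain, which supports the suspicion that the bottleneck is the processing pipeline rather than the hardware.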


I wondered if anyone had seen this, and whether there is a way of processing multiple BITS packages at the same time, or of speeding up the creation of a single BITS package. Do people use BITS for large packages, or should I be turning it off?


I have looked at the site settings for the senders but these are already set to process multiple packages simultaneously. This is the same for all site servers.


Please help.


Many thanks




BITS isn't actually used for traffic between a primary site and a secondary site.

I would start by applying the following hotfix, which solves a lot of issues with distributing packages: http://support.microsoft.com/kb/978021


The traffic between the primary site and the secondary site is handled by the sender and its schedule for that site, so make sure no limitations are in place there.

That would be a good start.



Hi Jorgen,


I've applied the hotfix and rebooted my SCCM servers; however, I am still seeing the same symptoms. This is what happens:


1. Two new OS packages are created on the primary site server and in the Data Sources tab Binary Differential Replication is enabled.

2. Both packages are assigned to a new distribution point on the secondary site server.

3. I then select Update Distribution Points.

4. Under the root of the distribution point drive on the primary site server, two files are created in the format "_S Meqcl.TMP". As far as I can tell, these two files represent the two OS packages that I created.

5. These files first appear at 0 KB; the size of one file then slowly increases until it reaches the size of the OS wim defined in the package.

6. The file increases in size by around 1GB per hour.

7. Once the file reaches the correct size, a file appears in the same location on the secondary site server as it is copied over to the DP.

8. At this time the second TMP file for the second package starts to increase in size as if it's only processing one package at a time.

9. The first file is then decompressed into the SMSPKGS$ folder on the DP, which takes around the same time as the compression did.

10. The second package then follows the same steps as the first until both packages are installed on the DP.
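The one-at-a-time behaviour in the steps above can be sketched with a toy model. The 1 GB/hour rate comes from step 6; collapsing compression, copy and decompression into a single rate, and the function itself, are illustrative assumptions, not anything ConfigMgr exposes:

```python
# Toy model of the serial processing in steps 4-10: each package is fully
# processed before the next one starts. Rate of 1 GB/hour is the figure
# observed in step 6; treating all phases as one lump is an assumption.

def completion_hours(package_sizes_gb, rate_gb_per_hour=1.0, concurrent=False):
    """Hour at which each package finishes, processed serially or in parallel."""
    if concurrent:
        return [size / rate_gb_per_hour for size in package_sizes_gb]
    finish, elapsed = [], 0.0
    for size in package_sizes_gb:
        elapsed += size / rate_gb_per_hour
        finish.append(elapsed)
    return finish

# A 15 GB wim followed by a small 2 GB update:
print(completion_hours([15, 2]))                    # serial: [15.0, 17.0]
print(completion_hours([15, 2], concurrent=True))   # parallel: [15.0, 2.0]
```

Serially, the small urgent package waits 17 hours behind the wim; processed concurrently it would be out in 2 hours, which matches the queueing symptom described below.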


The problem with this is that any package updates that happen after the initial distribution starts get put into a queue, and it can take hours before everything filters through; it even took a day in the case of one 30 GB wim.


I just wondered if there was a setting I'd missed or something, maybe I shouldn't be using BITS for large packages?


Under Resource Monitor on the primary site server, disk access remains constant at about 1 MB per second. RAM, CPU and network usage all remain pretty consistent. I have tested read/write speeds on the disk and it will happily transfer data many times faster than this, so I know it's not a hardware limitation.


Does anyone have any more ideas?


Much appreciated.




OK, so I've done some more research and it seems I had my terminology confused, so apologies for that: the issue is with site-to-site package replication. It looks like this might be down to the fact that the addresses that connect my sites are set to unlimited. It sounds like this limits the site server to processing only one package at a time. Is this correct?
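If the sender really is restricted to one concurrent sending per destination, the queueing behaviour described earlier falls out directly. A toy FIFO scheduler makes the point; the per-package hours and the effect of the concurrency cap here are assumptions for illustration, not documented ConfigMgr behaviour:

```python
import heapq

def drain_queue(durations_h, max_concurrent=1):
    """Finish time (hours) of the last package when at most
    `max_concurrent` packages are sent at once, in FIFO order."""
    workers = [0.0] * max_concurrent      # next-free time of each sending slot
    heapq.heapify(workers)
    for d in durations_h:
        start = heapq.heappop(workers)    # earliest-free slot takes the job
        heapq.heappush(workers, start + d)
    return max(workers)                   # makespan of the whole queue

# A 30 GB wim (~30 h at the observed rate) queued ahead of two small updates:
print(drain_queue([30, 1, 1], max_concurrent=1))  # 32.0 hours
print(drain_queue([30, 1, 1], max_concurrent=3))  # 30.0 hours
```

With a single sending slot the two small updates sit behind the wim for the full 30 hours; with three slots they would have gone out in an hour each.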





