rdr222

Established Members
  • Content Count: 18
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About rdr222
  • Rank: Member
  1. Cleaning up old posts. I did look into using WMI to query directly for new updates in the script; however, I ended up leaving an ADR in SCCM and having the script invoke it. I felt it was easier to set the criteria (or update it in the future) in the ADR itself than to retool the WMI query in the script every time it needed a change. The rest of the script then queries the SUG the ADR creates, takes the updates found in that SUG, creates a new SUG with a specific name, deploys the new SUG to the appropriate collections with the agreed-upon availability/deadline times for those users, deletes the ADR-created SUG, and emails me a notification with the script's log. I have some alternate parts in the script that consolidate the monthly SUGs into a baseline after they are three months old and deploy any baselines to the same list of collections that do not already have them deployed. I also have some scripts that email notifications to the end users when the updates become available to them and a reboot reminder on the day of the deadline.
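For anyone building something similar, that flow can be sketched with the ConfigurationManager PowerShell module. This is only a rough outline, not the poster's actual script: the ADR, SUG, collection, address, and mail-server names are placeholders, and the cmdlet and parameter names (taken from recent module documentation) should be verified against your module version.

```powershell
# Assumes the ConfigurationManager module is loaded and the CM site drive
# is the current location, e.g.:
#   Import-Module ConfigurationManager; Set-Location "ABC:"

# 1. Run the ADR, which creates its own SUG ("Patch Tuesday ADR" is a placeholder)
Invoke-CMSoftwareUpdateAutoDeploymentRule -Name "Patch Tuesday ADR"

# 2. Copy the updates it found into a SUG that follows the naming standard
$month   = Get-Date -Format "yyyy-MM"
$updates = Get-CMSoftwareUpdate -UpdateGroupName "Patch Tuesday ADR" -Fast
New-CMSoftwareUpdateGroup -Name "Updates $month" -SoftwareUpdateId $updates.CI_ID

# 3. Deploy the new SUG to each collection with its agreed-upon times
foreach ($coll in "Pilot 1", "Pilot 2", "Production") {
    New-CMSoftwareUpdateDeployment -SoftwareUpdateGroupName "Updates $month" `
        -CollectionName $coll -DeploymentType Required `
        -AvailableDateTime (Get-Date) -DeadlineDateTime (Get-Date).AddDays(14)
}

# 4. Remove the ADR-created SUG and mail the run log
Remove-CMSoftwareUpdateGroup -Name "Patch Tuesday ADR" -Force
Send-MailMessage -To "admin@domain.com" -From "sccm@domain.com" `
    -SmtpServer "smtp.domain.com" -Subject "Patch run $month" `
    -Attachments "C:\Scripts\PatchRun.log"
```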
  2. Cleaning up old posts. I did contact support about this issue and, after troubleshooting, was told it is a bug. The workaround I'm using is to query the System Resource.Distinguished Name property instead of the System Resource.System OU Name property, with a value like CN=%,OU=OrganizationalUnit,DC=Domain,DC=com
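For reference, a collection membership rule using that workaround ends up as a WQL query along these lines (the OU and domain components are placeholders to replace with your own):

```sql
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
where SMS_R_System.DistinguishedName like "CN=%,OU=OrganizationalUnit,DC=Domain,DC=com"
```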
  3. So I'm trying to better automate my update process in SCCM 2012 R2 SP1. Right now I have an ADR that runs in the evening on Patch Tuesday and finds the appropriate updates based on update classification, product, date, and not-expired/not-superseded status. It downloads the updates it finds into the deployment package, creates a new SUG for the month, and deploys it out to the first pilot group. The next morning I come in, change the names of the SUG and the deployment to meet our naming standard, deploy it out to the remaining pilot and prod groups, and email the end users affected in each group. I know I can use PowerShell to automate the deployment to the additional groups that I currently handle manually, and I could probably automate the renames and the email notifications too. The question I have not been able to find an answer for, though, is: can I script what the ADR is doing with PowerShell? Can I script finding the appropriate updates based on the search criteria (product, classification, date, etc.), create a SUG, and do all of this in one self-contained PS script set up as a scheduled task? I've looked through the PS cmdlets but have not found a way to search for the applicable updates based on criteria like I can in the ADR wizard.
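One way such a search can be approximated, absent a dedicated cmdlet, is by querying the SMS provider's SMS_SoftwareUpdate class directly over WMI. This is a minimal sketch under stated assumptions: "siteserver" and site code ABC are placeholders, and filtering on product or classification would additionally require joining the category classes, which is omitted here.

```powershell
# Placeholder site server and site code; the WHERE clause mirrors only part
# of a typical ADR's criteria (not expired, not superseded, recently revised).
$since   = (Get-Date).AddDays(-30)
$wmiDate = [System.Management.ManagementDateTimeConverter]::ToDmtfDateTime($since)
$updates = Get-WmiObject -ComputerName "siteserver" -Namespace "root\sms\site_ABC" `
    -Query ("select * from SMS_SoftwareUpdate " +
            "where IsExpired = 0 and IsSuperseded = 0 " +
            "and DateRevised >= '$wmiDate'")
$updates | Select-Object ArticleID, LocalizedDisplayName
```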
  4. I've done some more troubleshooting and found that on the devices in question, if a Heartbeat Discovery runs (the Discovery Data Collection Cycle run from the client), the System OU Name will have the full path and the devices will fall into the correct collections. However, when the AD System Discovery next runs, the System OU Name is again shortened to just the OU names, not the whole path, and the devices again fall out of the collections. In the domain/forest that the primary site is a part of, this does not appear to happen: the System OU Name always contains the full path regardless of whether the AD System Discovery or the Heartbeat Discovery updates the field. However, the devices this is happening to are not in the same domain as the primary site; they are all in separate untrusted domains which are being discovered but not published to and do not have the AD schema extended. So now my question is: is this behavior because the untrusted forests have not had their AD schemas extended and/or the primary site is not publishing to them?
  5. On some of the SCCM device objects, the System OU Name property shows up as a full path, yet on other objects it shows up as just the OU name. These objects are in the same OU and are all being discovered by AD System Discovery, AD Group Discovery, MP Client Registration, and Heartbeat. It seems that when you install the client, the full path always shows; however, when the AD System Discovery runs, it removes the full path for some. How can I make the full paths always show up (for collection query purposes)?
  6. We have our SCCM environment set up in a distributed model and scoped out for the various colleges'/departments' own IT teams to use, so shipping the machines back wouldn't work, since that campus is where that IT team does its work. I believe that having a DP at that location is the obvious way to go; I was just looking for some documentation to back up my claim as well.
  7. It is a fast link to the remote campus, which is why the people in my department who are against putting a DP there are against it. To them, since it's a fast link, there should be no reason to need a DP out there and there must be something wrong. We tried the registry tweaks and they didn't seem to improve performance much, and I'm always hesitant to do that sort of thing because I figure if the tweak were better, it would be that way by default. I'm really trying to make the case that an SCCM DP/MP should be part of the core infrastructure we place at remote sites like this, and am looking for the documentation to back it up.
  8. We have a small VMware cluster that we use to host some of the core infrastructure (DNS, DC, DHCP, etc.) at the remote campus; however, it wasn't built with the amount of storage that placing a DP there would require, and the differing opinions come from the people who write the checks for that sort of thing. If I can show documentation or best-practice guidance saying SCCM should be part of the core infrastructure we place at a remote campus, it would be helpful.
  9. I have a 2012 R2 SP1 primary site that is servicing the main campus of the university I work at. All the site servers are located in the main campus data center. We also have a remote campus about 15 miles away which utilizes the servers in the main campus data center. For the most part this hasn't caused any issues; however, when techs at the remote campus try to PXE boot and image a device, the TFTP portion of the boot process takes 10+ minutes to download the boot image, as opposed to the 30 seconds it takes on the main campus. Compounded when imaging multiple machines at once, 10 minutes turns into 20, 30, and so on, which is not feasible for the techs at the remote site. We have worked with our network engineers to verify that there were no problems on the network causing this difference, and after A LOT of testing we determined that everything is working correctly as currently designed. The time difference comes from how TFTP works, with its send-one-packet, receive-one-packet process. On the main campus this isn't much of a problem, but the minuscule bit of extra time between packets going back and forth between the main campus and the remote campus adds up to the extra time in the boot process (we actually drew out the math). So now, half of us are of the mind that we need to put a DP at the remote campus, and the other half wants to start doing registry hacks and messing with DLLs to increase the TFTP window size. Are there any criteria (physical distance, bandwidth, latency, clients managed, etc.) for when it is appropriate to place a DP (or any other roles) at a remote site? Any documentation I can show about the matter would be helpful. Thanks!
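The lock-step arithmetic behind that difference is easy to sketch. Assuming a roughly 300 MB boot image, a 1456-byte TFTP block, and round-trip times of about 0.2 ms on campus versus 3 ms to the remote campus (all assumed, illustrative figures rather than measurements from this environment), the per-block ACK wait alone reproduces the 30-second versus 10-plus-minute gap:

```python
import math

def tftp_transfer_seconds(image_bytes, block_bytes, rtt_seconds):
    """TFTP is lock-step: each data block waits for its ACK before the next
    is sent, so every block pays at least one full round trip."""
    blocks = math.ceil(image_bytes / block_bytes)
    return blocks * rtt_seconds

image = 300 * 1024 * 1024   # assumed ~300 MB boot image
block = 1456                # assumed TFTP block size in bytes

print(tftp_transfer_seconds(image, block, 0.0002))  # main campus: ~43 s
print(tftp_transfer_seconds(image, block, 0.003))   # remote campus: ~648 s (~11 min)
```

This is also why the registry tweaks target block and window size: larger blocks mean fewer round trips to pay, and a windowed transfer pays one round trip per window instead of per block.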
  10. I opened a case on this and had been working with a support engineer gathering process dumps, but the problem mysteriously disappeared after a few weeks. We hadn't made any changes or updates, so I'm not sure why it fixed itself.
  11. The workaround posted by Bitmapped is what MS also gave me. The support engineer I was working with acknowledged that it is a bug and said a fix will be included in the next CU.
  12. Since upgrading to 2012 R2 SP1, I've noticed that memory usage steadily climbs on my site server to the point that, after a couple of days, I am unable to connect with the console or log into the server and have to do a hard reboot. The process that is sucking up all the memory is SMSEXEC.EXE. Before the SP1 upgrade this didn't happen. Is anyone else seeing something similar?
  13. I'm still waiting for MS to get back to me about it. I showed one of the support engineers what the issue is, and he said they would try to replicate it and check with the product team whether it is an "intended feature". I can't imagine it is, since having to give access to the All Systems and All Users and User Groups collections kind of defeats the point of being able to delegate and limit access with security roles. I'll update the thread once I get more info.
  14. I updated from R2 to R2 SP1 last week, and one of the new SP1 features is the deployment verification of high-risk deployments like OSD task sequences. When users try to deploy a task sequence, they go to choose the collection and see the new high-risk verification prompt. The user can hit OK and choose a collection as normal. The next screen of the deployment wizard asks whether this is an available or required deployment. When choosing available, everything works as normal, but choosing required and hitting Next should pop up another verification, depending on the contents of the collection, which the user can confirm to continue with the deployment wizard. However, I've found that if the user's security role is not scoped to the All Systems collection and the All Users and User Groups collection, choosing required and hitting Next in the deployment wizard does nothing: no verification popup, no advance to the next screen. Since we delegate access to our users based on collections querying their department-specific OU, and they do not have access to the All Systems or All Users and User Groups collections, none of them are able to run required OS deployments. I opened a case with Microsoft today but am curious whether anyone else has a workaround or has seen this issue as well.
  15. So it turned out to be something with my boot image. I created a brand new MDT boot image through the SCCM console and used it for the task sequence instead.