Posts posted by BrianGW

  1. All,

     

    I could really use some help, as I'm pretty lost on what to do or try next. Basically, I have a Windows 7 image that I created in April and that includes the April updates. During OSD, I am attempting to install the newer updates as well as updates for software such as Office. This is the first time I am doing this, so I made a test task sequence by copying another one and deploying it to the Unknown Computers collection. I disabled most of the steps, including things like software installs, just to make testing faster. So basically, my task sequence applies the OS and makes some settings changes like joining the domain and applying the key. Next, the client is installed with the following installation properties: "SMSMP=server.domain.local CCMLOGMAXSIZE=5242880 CCMLOGLEVEL=0 CCMLOGMAXHISTORY=3 CCMDEBUGLOGGING=1". Normally, the system would then install the software, but that step is disabled.

     

    During my most recent test, I added two steps that were not there previously: the first two in the list below, related to machine policy. I get the same outcome with or without them (a PowerShell equivalent of these WMIC trigger calls is sketched after the list).

    1. Machine Policy Agent Cleanup "WMIC /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000040}""
    2. Validate Machine Policy / Assignment "WMIC /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000042}""
    3. I then put in a 60 second delay "Powershell.exe -command start-sleep 60"
    4. Scan for updates "WMIC /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000113}" /NOINTERACTIVE"
    5. 30 second delay "Powershell.exe -command start-sleep 30"
    6. Install Software Updates with the option for Mandatory Software Updates
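    For what it's worth, a PowerShell equivalent of the WMIC trigger calls above (a minimal sketch, assuming the same schedule GUIDs; it only swaps the tool, not the behavior) would look like this:

    # Trigger the software update scan cycle via WMI, then wait before the Install Software Updates step
    Invoke-WmiMethod -Namespace root\ccm -Class sms_client -Name TriggerSchedule -ArgumentList '{00000000-0000-0000-0000-000000000113}'
    Start-Sleep -Seconds 30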

     

    The first few times I tried this, there were a lot of updates, so I trimmed it down to one update released in May, KB3046002. That update is in a Software Update Group by itself. I figured if I could get this to work, I could start adding more in, since the size of the group does not appear to matter; it does not work either way. The group is deployed to the Unknown Computers collection as required, with both the availability time and the installation deadline set to as soon as possible.

     

    Everything seems to complete successfully, but when I log into the computer, the update has not been installed. If I comb through the logs, they tell me that the update is not required. However, if I go to Windows Update, the update is listed there and required for installation. I can also download and install it manually without an issue.

     

    Any help would be appreciated.

  2. I have been looking for a reliable report as well. I was using the Compliance 1 - Overall Compliance report for a long time, until I started having problems with servers showing as non-compliant or compliance unknown. I would check the server and it would appear to actually be compliant. I would try to force a state message, but that would not resolve the issue. After submitting a Microsoft ticket, I was told that the Overall Compliance report works off of WSUS and not the actual deployments I made, and to use States 3 - States for a deployment and computer instead. However, I question whether that is the correct report as well.

     

    I would love to hear other people weigh in on this as well.

  3. Hello everyone,


    I am having a very strange yet interesting problem. I am hoping that someone else has experienced this issue or can offer some insight into it. I have had tickets open with both vendors and have been working on the issue for weeks at this point, so I am hoping a fresh set of eyes from the community can offer some additional insight.


    I have two UCS 5108 blade chassis, each with five Cisco UCS B200 M3 blades. On each server I have loaded Windows Server 2012 R2 Core and installed Hyper-V. I have a separate server running System Center Virtual Machine Manager (SCVMM) 2012 R2 with the most recent rollup. On each server, I have presented four vNICs. One is for the DMZ and another is for a temporary connection that is later removed. The other two connect up to a trunked port carrying all my VLANs, one on Fabric A and one on Fabric B. Using SCVMM, I create a virtual switch that does VLAN tagging on top of a Windows NIC team made up of the vNIC on Fabric A and the vNIC on Fabric B. That switch then receives three virtual adapters, one tagged for each of the VLANs.


    The problem, unfortunately, is completely random. I can deploy the same Cisco templates and the same SCVMM virtual switch template to all ten servers, and I lose connectivity between servers over some or all VLANs. To make things even weirder, it isn't always all VLANs. I have three VLANs: some servers have no communication issues, others don't communicate at all, and others will communicate over VLAN 2 but not 3 and 4.


    So far, my troubleshooting steps have included recommendations from both Cisco and Microsoft support.


    • We removed the teaming from the SCVMM virtual switch entirely and are only connecting the NIC on Fabric A. This eliminated the theory that it was a pathing issue.
    • The issue spans both chassis, so it isn't isolated to a particular chassis.
    • If I create a vNIC that only has the native VLAN of 2, 3 or 4 and set the IP within Windows (removing SCVMM from the picture), everything works fine.
    • If we do a Wireshark capture, we can see that the server that "doesn't work" is not actually getting its ARP requests out to the fabric interconnect.


    Obviously, the last one is key, as that is the root of the issue: without ARP, it won't communicate. The hard part has been figuring out why the ARP request is not getting through. Has anyone come across a similar issue?
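    For anyone comparing notes, one thing worth checking from the Windows side (a small sketch, assuming the tagged virtual adapters live in the management OS) is what Hyper-V actually applied to them, and whether ARP ever resolves at all:

    # VLAN mode and VLAN ID Hyper-V applied to each management OS virtual adapter
    Get-VMNetworkAdapterVlan -ManagementOS

    # Current ARP/neighbor cache entries as Windows sees them
    Get-NetNeighbor -AddressFamily IPv4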


  4. My understanding is that if a machine misses the schedule, it will attempt to run the process the next time the client starts. It is normally recommended to leave it on the simple schedule because that randomizes the times it runs. With a custom schedule, all machines run and report back to SCCM at almost the same time, which creates a higher workload on not just the server but also the network. After hours it probably doesn't matter much, but if you have a lot of machines checking in first thing in the morning, that could cause an issue.

     

    There is an easy way to test though. Take a test machine and put it in its own collection. Deploy a custom schedule to that collection for something in the afternoon. Make sure the machine receives the policy and then shut it down. Let the time window pass and then turn it back on. Track the logs and see if it runs.
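    If it helps, the tracking step can be as simple as this (a rough sketch, assuming the cycle you are testing is hardware inventory; adjust the log name if it is something else):

    # Check the tail of the inventory log to see whether the cycle fired after the machine came back up
    Get-Content C:\Windows\CCM\Logs\InventoryAgent.log | Select-Object -Last 20

    # Or ask the inventory agent directly when each action last ran
    Get-WmiObject -Namespace root\ccm\invagt -Class InventoryActionStatus | Select-Object InventoryActionID, LastCycleStartedDate, LastReportDate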

     

    As for your antivirus question...wish I could help but I avoid McAfee like the plague. We are actually running SCEP on our systems.

  5. What I have done in the past is search for Required is greater than or equal to 1. This should give you all of the updates that are required on at least one machine. Take those updates, create a deployment, and deploy it however you want/need.

     

    If a machine already has the update, it will confirm that it is installed and then skip it. It will not attempt to install it again.
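    If you would rather script that search, a rough equivalent with the ConfigMgr PowerShell module would be something like the following (a sketch only; run it from a Configuration Manager console PowerShell prompt, and note the property names come from the SMS_SoftwareUpdate class):

    # Updates required on at least one machine, excluding expired and superseded ones
    Get-CMSoftwareUpdate | Where-Object { $_.NumMissing -ge 1 -and -not $_.IsExpired -and -not $_.IsSuperseded }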

  6. I believe the root of the issue is that the client isn't getting a policy.

     

    Did you verify that boundaries are configured properly? The primary server for the child domain should be associated with the boundary for those machines.

     

    Did you verify that your policies are correctly configured within SCCM?

     

    Did you verify the collections you have created for that domain and that the policies are properly applied?

     

    I would also take one of the client machines and run RSOP on it. Go to Computer Configuration > Administrative Templates > Windows Components > Windows Update. You should see one setting there that says "Specify intranet Microsoft Update service location" with a state of Enabled, and the GPO name should be Local Group Policy. If you double-click it, is it pointing to the correct primary server?
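    A quick way to check the same values without the RSOP UI (a small sketch; these are the standard Windows Update policy entries the ConfigMgr client writes as local policy):

    # WUServer should point at your primary/software update point
    Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' | Select-Object WUServer, WUStatusServer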

  7. Personally, I have run into issues in the past doing things like that. In particular, you have no way of knowing whether or not the script worked, because you can't pull reports on something like that. If your only detection method is that the file copied, you have no guarantee that it actually ran.

     

    In your case, since you are uninstalling software, I would recommend doing things a little differently. I would create an application for Lync (assuming you don't have one yet) and ensure that the detection method is fully functional. At that point, I would create a deployment to uninstall the software and assign that to whatever collection you are deploying Office 2013 to. Set this deployment to go out before the installation.

     

    What that is going to get you is confirmation as to whether or not the application was uninstalled on all computers. It will give you actual metrics so you can measure what machines had an issue and what machines didn't.

  8. I had to do a similar thing. My office was planning a migration off of older equipment onto newer, and during our planning phase we lost two hard drives in the RAID 5 array and lost everything. We are currently a one-server environment with no other DPs, so this is all I had to do for mine. I mention this because you didn't say whether you have any other DPs or anything of that nature.

     

    Basically, what I did was a backup and restore to another server. You would need:

    1. The backup that you run from the Site Maintenance options.
    2. A backup of the package source files

    If you are unsure of where those package source files are, you can run the following SQL query.

    SELECT * FROM v_Package
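    If you would rather pull the same list with PowerShell, you can query the provider instead (a sketch; ABC and YourSiteServer are placeholders for your site code and site server):

    # SMS_Package exposes the same PkgSourcePath value the v_Package view shows
    Get-WmiObject -Namespace root\sms\site_ABC -Class SMS_Package -ComputerName YourSiteServer | Select-Object Name, PkgSourcePath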

     

    It has been a while since I had to do this, but once the restore is done, you need to go back into the user accounts and re-enter the passwords for them. I also double-checked ADSI Edit to make sure that the server name was right and checked some of the client settings to make sure everything looked good. As long as everything is configured correctly, the clients should figure out the name change.

  9. You can additionally set the version property in your product code detection rules.

     

    I was going to suggest this as well. I have a similar issue with the Outlook CRM client. I actually use the file version you see when you right-click the client and hit Properties.

     

    The other option is to check the registry. Is there anything different in there? Sometimes a key will list the full version of the application.
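    For the registry option, this is where I would start looking (just a sketch; DisplayVersion under the uninstall keys often carries the full version number):

    # Full versions as recorded under the 32-bit and 64-bit uninstall keys
    Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
                     'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*' -ErrorAction SilentlyContinue |
        Select-Object DisplayName, DisplayVersion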

  10. Actually, I ran into the same issue and it is not the detection method.

     

    The problem you are having is that once the system runs one of the deployment types, it does not run any of the others. So once it runs the first script, it will ignore the remaining deployment types. Deployment types are more for picking which systems to install on, such as a 32-bit deployment and a 64-bit deployment.

     

    I would recommend the following steps.

    1. Create an application for the previous version, the one you are looking to replace. The most important part here is that the detection method and uninstall string work.
    2. Create the new application with just the one deployment type.
    3. Set the new application to supersede the previous one. Make sure the uninstall check box is checked.

    This way...when you deploy the new version, it will uninstall the old version first. Using this method will also remove the old version from the available software for users to install.

     

    P.S. - Anyone out there...if I'm wrong...I would love to know because this is how I do all of my deploys.

  11. You shouldn't need the GPO for the updates server, and that may be part of the issue. I ran into a problem in my environment where the GPO and the client were fighting each other. Check UpdateTrustedSites.log to make sure that your computer is adding your SCCM server as a trusted location. That was my issue.

     

    Is the administrator configured in the Users (Administration workspace)?

  12. Since this is a new installation and not just an update, what I do for this is set the new application to supersede the previous version.

     

    So basically, I would create a new application for version 3.1. Once it is created, right-click the application; one of the tabs is Supersedence. You can then add the version 3.0 application to it. Then you just need to deploy the new application again. You can also set a few other options, like uninstalling the previous version.

  13. My understanding is that no, you don't need the ContentLib folder. I was told that you only need to back up two things: the automated backup that Configuration Manager does (assuming you have that configured) and a backup of your package source files. If you aren't sure where those are, you can run the following query against your database.

    SELECT * FROM v_Package
    
    

    I just look at the packages I created and back those up, since in a disaster the built-in stuff should be recreated.

  14. I have been working with SCCM 2012 SP1 and I feel like the process I am currently using for Software Updates is a bit messy. I have a few questions about whether I am doing it right / following best practice, or just to find out how others are doing it. Here is how my environment is laid out.


    My device collection uses the naming convention of "SUG : OS : Domain : Purpose." So examples of my names would be SUG : Windows 2008 : Production : DC, SUG : Windows 2008 : Production : App, etc.


    For my Software Update Groups, I first search All Software Updates with the Product, Superseded=No, Expired=No, Update Classification=Critical or Security and Required=1. I then take those results and create a group named 2013-09 Windows 2008, 2013-10 Windows 2008, etc., so basically the year and month plus the OS version. When I deploy them, I make the deployment name a combination of both names. For example, for the September Windows 2008 updates, the deployment name will be "2013-09 : SUG : Windows 2008 : Domain : App." I also never modify the deployments once they are out there, so the 2013-09 updates would have a required installation date and time of 9/21 and I never go back and modify that.


    I am currently building Software Update Groups for Windows 2003, 2008, 2012, XP, 7 and 8, Lync/Office/Silverlight, Lync Server 2010, Exchange 2010 and SCVMM 2012. As you can imagine, having one group for each of those per month is extremely messy when you look at the Software Update Groups list. Granted, I can do a search for something like 2013-09 and get the ones from that month, but overall there are a lot of groups.


    I guess at the end of the day, I would ask the following questions.


    1. Is the current setup I am using normal? Am I really going to have this many update groups constantly, with no end in sight?
    2. Should I be going back and modifying the previous deployments to just be available?
    3. Would the suggestion be to create a yearly or baseline Software Update Group and, once the updates are installed, move them to this baseline group and then destroy the one that I had originally created?
    Any assistance...even if it is just on one question or idea...would be helpful and I would appreciate it.




    Thank you in advance.

  15. Hello everyone!

     

    I am having an issue that I was hoping someone here could help with. I have been searching for an answer but have not come up with anything concrete or that I fully trust, so if this has already been addressed on this forum and I haven't found it, I apologize.

     

    I currently have an SCCM 2012 SP1 environment. I have created a Software Update Group and deployed it to all of my servers. The updates have worked on all of my standard (full GUI) servers without an issue; some deployments were just available, some were required, and they all went off without a hitch. Now I have to tackle our Hyper-V hosts, which are in clusters and run Server 2008 R2 Core. I have made the updates available to them, but I can't figure out how to get them to install. I have confirmed that the servers are reporting in, have the client installed and active, and are approved.

     

    This is what my research has turned up.

    1. I have tried both Core Config and SConfig, but neither one works. I didn't expect them to, but I figured it was worth a shot in the dark. Neither of them finds updates that are good to go.
    2. I read that you can manually run SCClient.exe from C:\Windows\CCM, but that just produced an error. Further research revealed that this should work on Server 2012 Core because it has .NET installed.
    3. I then found this post... http://social.technet.microsoft.com/Forums/en-US/e0b6d7c3-a199-4f1e-aff5-60ba17fcef43/install-patches-from-sccm-2012-on-windows-server-core but it looks for the updates in the CCMCache folder, which appears to be empty.
    4. I found another script (whose link I seem to have misplaced at the moment), a PowerShell and a VBS script pair that is supposed to go through and install everything, but there is no word on whether the systems reboot immediately or whether the script actually works (the sketch just after this list is roughly what those scripts boil down to).
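    For what it is worth, those scripts typically boil down to something like this sketch against the client SDK WMI classes (CCM_SoftwareUpdate / CCM_SoftwareUpdatesManager in root\ccm\ClientSDK); I would test it on a single host before trusting it with a cluster:

    # Find the updates the client already knows it is missing (ComplianceState 0 = required)
    $missing = Get-WmiObject -Namespace root\ccm\ClientSDK -Query 'SELECT * FROM CCM_SoftwareUpdate WHERE ComplianceState = 0'

    # Hand them to the client agent to install; reboot handling should still follow your deployment settings
    if ($missing) {
        ([wmiclass]'root\ccm\ClientSDK:CCM_SoftwareUpdatesManager').InstallUpdates([System.Management.ManagementObject[]]$missing)
    }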

    When I run compliance reports on the server, it says that it is out of compliance and lists all of the approved updates that are not installed. So from what I can tell, the server knows that it is missing updates.

     

    The only other idea I had was to go into Client Settings, under Software Updates, and change Enable software updates on clients from Yes to No. I am hoping this would let the SCCM client continue managing every other aspect of the server that I configure, just not Windows updates. I know that I basically lose control of updates, so to speak, but that is not a huge deal for us.

     

    Any assistance at all would be appreciated.

     

    Thank you.
