Everything posted by joeman1881

  1. Deleting the machines that the NICs were first associated with did not fix the issue. These two devices are still receiving a 0x80004005 error after searching for available deployments.
  2. Is there any reason why some machines skip past PXE because they are known, while the others get into PE and then fail? I just want to understand whether it's random or there's a pattern to it. Thanks for the reply.
  3. New computer scenario. These are 40 devices that were sent to us NIB. Everything I am finding leads me to think it's still MAC address related. I only have 6 Surface USB dongles for imaging all of the devices. I found that deleting the machines from the console after imaging corrects the issue of getting into PE, and in most cases allows the machines to image, because I've already boxed the previous machines up, which prevents a hardware audit from running even if they are rediscovered. The issue at hand is getting into PE, and from the logs I can see this:

     Client lookup reply: <ClientIDReply><Identification Unknown="0" ItemKey="16780947" ServerName=""><Machine><ClientID/><NetbiosName/></Machine></Identification></ClientIDReply> SMSPXE 2/27/2014 9:30:45 AM 3440 (0x0D70)
     60:45:BD:F9:98:A9, 63898079-893F-74F0-892B-C860C64E9F88: device is in the database. SMSPXE 2/27/2014 9:30:45 AM 3440 (0x0D70)
     Client boot action reply: <ClientIDReply><Identification Unknown="0" ItemKey="16780947" ServerName=""><Machine><ClientID/><NetbiosName/></Machine></Identification><PXEBootAction LastPXEAdvertisementID="" LastPXEAdvertisementTime="" OfferID="" OfferIDTime="" PkgID="" PackageVersion="" PackagePath="" BootImageID="" Mandatory=""/></ClientIDReply> SMSPXE 2/27/2014 9:30:46 AM 3440 (0x0D70)
     60:45:BD:F9:98:A9, 63898079-893F-74F0-892B-C860C64E9F88: no advertisements found SMSPXE 2/27/2014 9:30:46 AM 3440 (0x0D70)
     60:45:BD:F9:98:A9, 63898079-893F-74F0-892B-C860C64E9F88: No boot action. Aborted. SMSPXE 2/27/2014 9:30:46 AM 3440 (0x0D70)
     60:45:BD:F9:98:A9, 63898079-893F-74F0-892B-C860C64E9F88: Not serviced. SMSPXE 2/27/2014 9:30:46 AM 3440 (0x0D70)

     I will next try to locate the machines that these dongles belong to, delete them, and re-attempt a deployment. I will update as I go. Thanks for the reply.
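For reference, the reply XML in the log above can be pulled apart to see exactly why SMSPXE aborts: the identification carries a real ItemKey (the device resolves to an existing database record), but the PXEBootAction has an empty OfferID (no deployment matched). A minimal sketch for triaging these log entries, assuming Python is on hand (the XML is pasted from the log above, not invented):

```python
import xml.etree.ElementTree as ET

def parse_pxe_reply(xml_text: str) -> dict:
    """Extract the fields SMSPXE appears to use when deciding whether to service a client."""
    root = ET.fromstring(xml_text)
    ident = root.find("Identification")
    action = root.find("PXEBootAction")
    return {
        # A non-empty, non-zero ItemKey means the device is already in the database.
        "known_device": ident is not None and ident.get("ItemKey") not in (None, "", "0"),
        "item_key": ident.get("ItemKey") if ident is not None else None,
        # An empty OfferID means no advertisement/deployment matched the device.
        "has_deployment": action is not None and bool(action.get("OfferID")),
    }

# The boot-action reply from the SMSPXE log above:
reply = ('<ClientIDReply><Identification Unknown="0" ItemKey="16780947" ServerName="">'
         '<Machine><ClientID/><NetbiosName/></Machine></Identification>'
         '<PXEBootAction LastPXEAdvertisementID="" LastPXEAdvertisementTime="" OfferID="" '
         'OfferIDTime="" PkgID="" PackageVersion="" PackagePath="" BootImageID="" Mandatory=""/>'
         '</ClientIDReply>')

info = parse_pxe_reply(reply)
# known_device is True with item_key "16780947", has_deployment is False --
# matching "device is in the database" followed by "no advertisements found".
```

That combination (known device, no matching deployment) is consistent with the dongle's MAC pinning these "new" machines to an old record that the Unknown Computers deployment no longer covers.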
  4. Looking deeper, I found that the machines are not matching any deployments. This leads me to think it still has something to do with reusing the same NICs. This is going to be a huge issue, because we are deploying over 1,000 of these machines and moving towards 8,000 of them over the next few years. Does anyone have advice on how I can avoid capturing the MAC address of these machines individually? It's a nice reference to be able to check the MAC of a typical machine, but clearly that isn't going to work in this scenario.
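One quick way to spot which database records are colliding on a shared dongle is to group exported device records by MAC and keep only the MACs claimed by more than one device. A sketch with made-up sample data (the device names below are hypothetical; only the first MAC comes from the log above):

```python
from collections import defaultdict

def devices_sharing_macs(records):
    """Group (device_name, mac) pairs by MAC, case-insensitively,
    and return only the MACs claimed by two or more devices."""
    by_mac = defaultdict(list)
    for name, mac in records:
        by_mac[mac.upper()].append(name)
    return {mac: names for mac, names in by_mac.items() if len(names) > 1}

# Hypothetical export -- e.g. device name + MAC pulled from the console:
sample = [
    ("SURFACE-01", "60:45:BD:F9:98:A9"),  # MAC from the SMSPXE log above
    ("SURFACE-07", "60:45:bd:f9:98:a9"),  # same dongle, reused on another device
    ("SURFACE-02", "00:15:5D:00:10:01"),  # made-up, unique
]
conflicts = devices_sharing_macs(sample)
# -> only the shared dongle MAC is reported, with both device names
```

With only 6 dongles for 1,000+ machines, each dongle MAC will accumulate many device names over time, so a report like this identifies exactly which stale records to delete before the next PXE attempt.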
  5. I received 40 loaner gen 1 Surface demo machines yesterday and noticed I'm now getting a 0x80004005 error on only a few machines when I get into Windows PE and attempt to start my task sequence. The odd thing is, they are all the same machine, meaning no difference in make/model. The error pops up right after I enter my credentials to view the list of available task sequences. It's almost as if it looks and sees the machines don't fall into any category; however, I am deploying to the unknown collection, and these machines are all NIB. I don't think it could be a driver issue if the task sequence completes fine on other machines. I had already tested this deployment with 4 Surfaces (gen 1/2) prior. I also ran into an issue with reusing the same NIC dongles yesterday, but I resolved that by adjusting my task sequence to not record the network information, i.e. the MAC address. That issue was causing the machines to not even PXE boot, as they were active clients, not unknown.
  6. In our environment we deploy Windows 7 32/64-bit and Windows 8 32/64-bit, both Enterprise. I imported a .wim for each of these OS types, and then packaged all the base software I thought we would need. I allow our technicians (who each handle desktop support for their own sites) to create their own collections. To keep this organized, I added a folder for each technician, which gives them their own space to create test collections along with whatever folder structure they like. I have them create their own task sequences using whichever OS/software packages are available. If they come to me with another piece of software their site requires, I package it at my earliest convenience to make their jobs easier. Their method of deployment is usually PXE, deploying to the "unknown" collection. The reason I leave it to them is that some of the hardware in our environment is older and may not support Windows 8 as well as the newer machines do. Any poor decisions they make force more work onto them, so there are usually no issues. Let me know if you have any more questions.
  7. I initially started deploying updates via SCCM packages. This works great for machines that have the client installed and are communicating correctly. We are currently using Deep Freeze on some machines, so in those cases the client isn't correctly installed yet, and thus those machines do not get updates. What I did was configure WSUS as an update point to allow the techs to manually check for updates and download from the server rather than from Microsoft Update. Well, 500 GB later, I've decided maybe I should change this setting so the machines look to WSUS for approvals but still download the updates themselves from Microsoft Update. If I change this and then delete the folder where all of these updates are stored, is that going to cause issues? I am trying to keep the drive size down on this server so I can do daily backups. Any input?
  8. I have my server running on a Hyper-V VM with 4 virtual cores and 24 GB of RAM. I have SQL's min/max memory set to 8196/10000 MB. Initially my VM only had 16 GB of RAM, which is why I set it at that level. I thought maybe bumping this up would increase performance, but checking Task Manager, I see SQL isn't even using that much. I had a Microsoft rep come out and help configure part of this server for OSD, and he told me that I was surpassing the minimum requirements for a fewer-than-25,000-client scenario. Is anyone else running a similar configuration? If so, what is your experience with running the console remotely? Thanks for the help thus far!
  9. I am also having these issues with my server deployments. I am still in the testing phase for server deployments, so at this point this is more of a concern than an issue, but I look forward to hearing some responses.
  10. This is in the log several times, but only on 2 dates, at separate times:

      [1, PID:19740][02/03/2014 15:55:27] :System.Management.ManagementException
      Not found
         at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)

      I did a quick search online, and it appears people who were not able to connect to their server were receiving a similar error. I am able to connect, but it is just very sluggish.
  11. Has anyone run into issues with their console running sluggishly? I notice on the server it's not too bad, but my techs are having issues when the console is running from their local machines. We have a 10 Gb backbone and a gigabit link from the switch to their machines, so I don't believe it's a bandwidth issue. Any advice on things to check to improve functionality? I was helping a technician the other day with making a driver package for training, and the console must have crashed 3 times in a row from being unresponsive. Thanks in advance.
  12. OK... that's what I was thinking of doing; I just didn't know for sure whether there was a setting that I missed. Thanks for the insight!
  13. If anyone has insight on this, it would be great. I ran through the ADR wizard again, and I'm not finding the section to make the deployment Available/Required. I also checked the deployment package to verify that this didn't need to be configured after the ADR was created. Has anyone else run into this?
  14. That is exactly what I need, but when I run through the steps to create my ADRs, I am not seeing this option like I do with software deployments. Am I missing something?
  15. What would be the downside to just updating your base .wim in the console? I was also running into issues with slow deployment times due to 140 updates, but updating my .wim files corrected this for me.
  16. In my environment, we previously had 2 WSUS servers, Forefront boxes, and SCCM servers at each site. We were able to replace all of this with 1 SCCM 2012 box, and it has been great. I had never worked with this product before because I am new in my career, but I have learned in the last few months just how powerful a tool it is!

      My latest struggle has me changing my software update plan daily. I created collections for my Windows 8, Windows 7, and XP machines that auto-populate based on OS and OU; automatic deployment rules are pushing out correctly to these machines, and they are all happy. Servers are where my issue lies. Prior to implementing this new SUP, we had the servers reach out to the servers' WSUS update point, download updates, and await approval/install. This means once a month we log onto all of our 150-ish servers and click Install, reboot our VMs, update our hosts, and reboot them.

      I was hoping to set up something similar with SCCM by pushing updates via Software Center, like I do with my client machines, but instead of silently auto-installing, I just want them to be available. I can't seem to get this functional in a test environment. My next thought was: maybe I can just push the updates to install, and then, when we are ready to finalize, reboot the servers after hours, so we could skip the "wait for install and then reboot" process we currently go through.

      My only concern is that with my current deployments to clients, we have reboots suppressed and we aren't using a maintenance window, yet after updates the machines still say they need to be rebooted within 7 days. That seems like a lot of time, but in my environment we only have 3 network/server techs for all 20 sites, so sometimes we are late on our updates due to other pressing issues. I cannot have SharePoint, Exchange, or our main DCs go down unexpectedly.

      Does anyone have insight on how I can achieve one of those two scenarios? I would also like to hear if anyone has a better way of managing this.

      Thanks!