Everything posted by h4x0r

  1. I believe the reason is that the RemoteInstall folder is not removed when the WDS role is uninstalled...as a result, you end up with an overlap between corrupt files and the new ones you need to implement. Don't forget that you need both the x86 and x64 boot images distributed to your DP. Glad I could help!
  2. They do have USB ports, and I had forgotten to mention the USB NIC as a consideration as well. I have not had great success with USB NICs in the past...and, as you point out, there will be a limited number of supported devices there. Thanks!
  3. Are you PXE booting these via legacy or UEFI? I have seen instances where, if a device is PXE booted via legacy but the disk is set up for UEFI (GPT), the format drive step can bugger things up and the apply OS step can't find the partition...but network boot with UEFI and it formats correctly, and the OS applies fine. Also, there is this thread regarding UEFI PXE booting issues.
  4. A colleague of mine had a similar issue with his configuration. I think we ended up solving it by removing WDS (un-checking the option on your DP to respond to PXE), renaming the RemoteInstall directory (to RemoteInstall.old), re-enabling PXE support on the DP, and then re-creating his boot images (he had MDT integration, so we used that). We did this for both the x86 and x64 boot images.
  5. Good day everyone, I would like to pick the brain of the proverbial community think-tank here on the forums.
     The product: We are looking at purchasing a number of (what I am calling) WinBooks for students (Dell 3180s, Lenovo N22, etc.). These are basically Windows 10 versions of their Chromebook counterparts: 11" screens, Celeron/Pentium procs, 4GB RAM with 64/128GB SSDs...and wireless only.
     The IDEAL solution: Generally speaking, I think the solution to provisioning these would be Intune, but that is not on the table for budgetary reasons. As such, I do not have CM set up as an MDM.
     The more likely solution: Use WICD to create a provisioning package that we apply when the new laptops are first turned on...apply SharedPC settings, a wireless profile, domain join, etc.
     The concerns/problems: We like to image things that come in for a number of reasons...it cleans off bloatware from the manufacturers, installs organization-specific software, customizes other settings, etc. However, with most of these devices having only wireless cards and no dedicated NIC, there are a couple more hoops to jump through in order to image them (currently we just use a USB drive). Using WICD to create a provisioning package that we apply at first boot is the simplest solution, but it leaves all the manufacturer's software installed, as well as trial software that gets in the way. Using a USB drive for OSD gives us a clean laptop with the software we want installed, but still leaves us needing to do something about the wireless profile and domain join.
     Theorycrafting: My initial thought was to use DISM to apply provisioning packages at various steps in the TS in order to automate the process. For example, first apply a package with the wireless profile immediately after the image is applied, and then, during the software installs, apply another package with the domain join (no idea if this would work yet, as I have yet to test it)...this would be contingent upon the wireless profile being applied first, and the machine actually connecting to the network during the Install Applications portion of the TS.
     What else should I take into consideration? Has anyone else dealt with something similar? Perhaps there is a simpler solution that I am missing (can't see the forest for the trees)? All input is appreciated! Thanks!
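     The DISM idea could be sketched as two Run Command Line steps. This is untested theorycrafting: the .ppkg names are placeholders, and I'm assuming the built-in OSDTargetSystemDrive task sequence variable points at the deployed volume.

     ```shell
     REM Step 1: Run Command Line, immediately after "Apply Operating System".
     REM Still in WinPE, so service the offline image on the deployed volume.
     REM wifi-profile.ppkg is a placeholder name for your exported package.
     DISM.exe /Image:%OSDTargetSystemDrive%\ /Add-ProvisioningPackage /PackagePath:wifi-profile.ppkg

     REM Step 2: Run Command Line, after "Setup Windows and ConfigMgr",
     REM once the full OS is running, apply the domain-join package online.
     DISM.exe /Online /Add-ProvisioningPackage /PackagePath:domain-join.ppkg
     ```

     The ordering matters: the wireless profile has to land (and connect) before the online domain-join step has any network to talk to.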
  6. I'm in 100% agreement with you there. There are a number of school districts in our state who have inherited a ConfigMgr setup, either by being a new hire or after someone left the department, and they get handed management of CM...I am always telling them: test, test, and then test again.
  7. Thanks for this write-up, Niall...I was actually getting ready to start work on this very project here. This is particularly useful in my scenario, for a number of reasons. I work for a K-12 school and the AD/network structure is...different from most places. I do not have access to my WSUS server, AND it sits in the DMZ behind my internet filter (don't get me started). The entity controlling the approved patches for Windows 10 has them on a delayed schedule because they test all patches first to make sure they are compatible in our environment. I have the potential to stand up my own WSUS to use with CM, but the short version is that it isn't happening in the immediate future. So in short, I needed a way to upgrade all my 1511 stations to 1607 (or even the Creators Update) in a controlled manner, rather than letting WSUS hand out the updates and simultaneously bottleneck my internet filter. Thank you for your work on this!
  8. Thanks Tregelen...you mention different approaches...do you mean thick vs thin images being deployed?
  9. Thanks Rocket Man...I agree that 4min seems ludicrous...I'm sure there are things we could do to better optimize our OSD...but there's no way I could bring it down to that. Do you mind if I shoot you a PM with a couple other questions?
  10. Good day everyone, Let me provide some background...I work for a school district, and we use CM for imaging, software deployment, endpoint security, reporting, etc...fairly standard in my book. In a recent session with some other districts, another tech mentioned that they were able to image a workstation in 4 minutes using their FOG server (FOG is a free imaging tool that runs on Linux...look it up if you are curious). Now, there are a lot of factors that could come into play there...network congestion, SSD vs. spinning disk, etc...but part of me is sitting here going, there's no way that they have booted, downloaded an image, applied the image, gone through all the "first boot" setup, and been ready to log in within 4 minutes. Obviously I have my own idea of what it means to image a workstation...for me it is the process start to finish, with the workstation ready to go at the login prompt. If we're talking about the time it takes to simply download the WIM file, then that's something else. So I'm curious...based on what I just said (start to finish, all software installed, and ready to log in), how long does it take you to image a workstation with ConfigMgr? Please let me know whether or not you use SSDs in your workstations. Thanks!
  11. Ha, gotta love that. I mean, it is a fairly small inconvenience all things considered, but it makes me wonder what the real culprit is.
  12. Well, I ran across some interesting information the other day...I forget why I was looking at this thread on reddit, but from what I can tell, this may be the issue: Hyper-V VMs need to be Gen 2 to deploy Windows 10 successfully/correctly. I've been using the same VM for years at this point, and hadn't even thought of changing/checking the VM generation. I updated to the 1511 ADK the other day and noted that I was still experiencing the issue. When I created a new Gen 2 VM on my workstation, however, it worked like a charm. I hope this helps anyone else who might run into this issue.
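     For anyone rebuilding their test VM, a minimal sketch of creating a Gen 2 VM from the Hyper-V host (run from an elevated prompt with the Hyper-V PowerShell module installed; the VM name, memory size, and VHDX path are just examples):

     ```shell
     REM Names, sizes, and paths below are illustrative only.
     powershell -Command "New-VM -Name 'Win10-OSD-Test' -Generation 2 -MemoryStartupBytes 4GB -NewVHDPath 'D:\VMs\Win10-OSD-Test.vhdx' -NewVHDSizeBytes 60GB"
     REM Gen 2 VMs boot UEFI and can PXE boot from the standard synthetic
     REM NIC, so no Legacy Network Adapter is needed for the boot image.
     ```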
  13. Thanks for the info...I have it on my to-do list to get that upgraded, but have not had the time just yet. I will update to the latest versions of those and see if that makes a difference in my environment.
  14. Reviving a semi-old thread here...I'm also having the same experience imaging my Hyper-V VM with my captured Win10 image. I haven't had a chance to test this on a physical station, however. Other threads that I've come across all share the same issue, but there is never a definitive resolution. Has anyone made progress on this?
  15. I would start by checking your site and component status to see if there is anything that needs to be resolved there.
  16. That's the same behavior I was having, too. I would suggest extracting the installer (mine was like 68MB, but right at 1GB after extraction) and running the setup.exe with your switches that way. You can do that as either a package or an application (Peter van der Woude has instructions on his blog for setting this up as an application if you so wish). I never found out why I couldn't deploy it from a TS as the 68MB file, but I suspected it may have been some weird caching issue. However, that's the only app/package I've ever had that problem with.
  17. Side note regarding the .NET 4.5 update...you might need to extract the contents of the update and put those in a package/application, as I had trouble with the .NET 4.5 update running from a TS when using just the compressed installer. But yeah, definitely re-distribute those packages if you haven't already distributed that content (I don't think migrating from 2007 to 2012 automatically distributes packages to your DP).
  18. Always check to make sure you have a successful backup before anything...but I suspect you'll be fine. https://social.technet.microsoft.com/Forums/systemcenter/en-US/f504a4ed-5ad4-44f0-8d32-e41aab0665ab/sql-server-security-mode-for-sccm?forum=configmgrsetup
  19. Yeah, in this case you might need to see what exit code the installer returns for success or failure, then script something like writing to a registry key that can be used for application detection, since no MSI product code is being generated. Otherwise, this will have to run as a package.
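     A minimal batch wrapper along those lines; the installer name, the /S switch, and the registry path are all placeholders for your own values, and your installer's switches and exit codes will differ:

     ```shell
     REM wrapper.cmd - hypothetical example for a non-MSI installer.
     setup.exe /S
     REM Bail out with the installer's own exit code on failure:
     IF %ERRORLEVEL% NEQ 0 EXIT /B %ERRORLEVEL%
     REM On success, stamp a value the Application's detection rule can check:
     REG ADD "HKLM\SOFTWARE\MyOrg\Detect" /v "WidgetApp" /t REG_SZ /d "1.0" /f
     EXIT /B 0
     ```

     The deployment type's detection method would then just look for that registry value.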
  20. That's six of one, half a dozen of the other...at least, I didn't THINK you had to change your site code under that scenario...which is ultimately what the OP is trying to avoid. But with a side-by-side migration like that, I would be concerned about being able to bring the old server back up in the event that there's a problem with the new one (since that is apparently a requirement in that environment).
  21. I'm assuming that means you're having to build that detection into your scripting, and then returning certain exit codes based on success or failure...is that what you mean?
  22. If you shut down the old server, reset the object in AD, and then join the new server under the same name, it will take the place of that AD object...THEN, if (for whatever reason) you have to shut down the new server and bring the old one back online, just do that same process to bring the old server back...because, really, any issues you have with the new server are probably going to surface right away. I can say that I have used this method effectively at another site where an SP install crashed and the site had to be restored. We took the opportunity to wipe the VM and install Server 2012 R2, effectively overwriting the "old" server object in AD with this new one. After getting SQL squared away, we ran the site restore and bing, bang, boom...the site was up and running again. Overall, I would say that as long as you have a good backup (i.e., not corrupt), this is a VERY easy process.
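     The reset-and-rejoin steps can be sketched like this; the DN, server name, domain, and account are placeholders, and you can just as easily do the reset through ADUC (right-click the computer object, Reset Account):

     ```shell
     REM Reset the old server's computer account (DN is illustrative):
     dsmod computer "CN=CM01,OU=Servers,DC=corp,DC=example" -reset
     REM Then join the replacement server to the domain under the same name:
     netdom join CM01 /domain:corp.example /userd:CORP\Administrator /passwordd:*
     ```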
  23. One giant zip might get tedious when you have application updates that need to be included in later iterations. It would probably be better (assuming you REALLY wanted to go the zip-copy-unzip-install route) to create individual zip packages to make updating them easier. It would also help with troubleshooting if something were to go wrong...but still...this seems like a lot of extra work, and you're losing application detection as well. Just my two cents.
  24. So power off the old server and name your new server the same? Unless I'm missing something, you would just reset that computer object in AD and the backup would restore with the correct server name.
  25. If you just need to move it to different hardware, then do a site backup and restore on the new hardware. No need to create a new site. *edit:* Depending on your organization's size, adding a secondary server may be something to look at, but the easiest thing will be to restore a backup onto the new server.