Embalmed

Established Members
  • Content Count: 3
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Embalmed
  • Rank: Newbie
  1. Not really helpful to your situation, but I did put something together, and I'll try to explain how I did it.

For planned hardware replacements, I created two collections. The first collection (stage1) runs a required deployment of a script that gathers data from the device: the typical stuff we embed on it to describe the department, business unit, and any other data we want tied to the device. The second collection (stage2) is an available USMT backup task sequence, meant for the tech delivering the new device.

When it is time to schedule the replacement of a device, the analyst adds it to the stage1 collection. The script gathers the info and condenses it into an XML file. The tech building the bare-metal install has an option in the naming script that imports the data gathered from the XML. At that point, the task sequence has both the old and new PC names. Toward the tail end of the task sequence, it sends a remote command to another box to create the USMT entry and archive the XML (so other people won't try to build that replacement twice). The task sequence also adds the new device to another collection, which makes a USMT restore task sequence available to it.

The day before the replacement, a tech adds the doomed computer to the stage2 collection. When the replacement begins, the tech logs in, starts the backup through Software Center, and powers the machine down. Then plug in the new computer and run the restore through Software Center. Not a perfect routine, but it is linear enough that I can tell people what to do.

The problem I had in testing was that the guy doing my testing thought he would be clever and run step 4 (the restore) while the bare-metal install was still running. The typical behavior, if the USMT entry hasn't been made, is to make the restore target the same PC. This doesn't appear to be something you can manipulate, and after discussion with my peers I suspect it is a security feature against hijacking other users' data.

Ad-hoc machines will likely just need a manual USMT process to onsite storage; we do have something like that for Win7, but we'll need to overhaul it for 10. For the record, I am going to need to edit this later... I haven't mastered this whole "communication" thing yet.
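For anyone curious, the "gather and condense into an XML" step above can be sketched roughly like this. This is only a minimal illustration, not my actual stage1 script: the element names, the metadata fields, and the output path are all made up for the example.

```python
# Hypothetical sketch of the stage1 "gather" step: collect the metadata we
# embed on a device and condense it into one XML file that the naming
# script can import later. Field names and paths are illustrative only.
import xml.etree.ElementTree as ET

def build_replacement_xml(old_name: str, metadata: dict) -> ET.Element:
    """Condense the gathered device data into a single XML element."""
    root = ET.Element("Replacement")
    ET.SubElement(root, "OldComputerName").text = old_name
    for key, value in metadata.items():
        # e.g. Department, BusinessUnit, or anything else tied to the device
        ET.SubElement(root, key).text = value
    return root

def write_replacement_xml(path: str, old_name: str, metadata: dict) -> None:
    """Write the condensed data where the bare-metal naming script can find it."""
    ET.ElementTree(build_replacement_xml(old_name, metadata)).write(
        path, encoding="unicode")
```

The naming script on the bare-metal side would then do the reverse with `ET.parse(path)` and read back `OldComputerName` plus whatever fields were gathered.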
  2. Howdy, I hope this section of the forum gets enough visibility. My understanding is that the normal flow, if I want to run a USMT backup through a task sequence and restore to a different computer, requires me to pre-emptively define the source and destination. My searches have all returned very limited results, and I am hoping there are more options.

I'd like to be able to run a backup task sequence (either available or required), grab a random PC, and run a restore task sequence to install the data on it. I'm not pretending the software would know the destination without being told; I'd just like to be able to direct it without knowing the destination prior to the backup. The other scenario is harvesting data after failed remote wipe-and-reloads when upgrading from 7 to 10. Without a doubt there will be hardware failures, and saving face may involve simply doing a hardware swap.

So far I have played with the OSDStateStorePath variable, hoping I could just steal the known path. It didn't appear to work at all; I suspect the restore knew where it was supposed to go, and my intended destination wasn't it. Am I playing the impossible game?
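One workaround I have been mulling over (an assumption on my part, not a documented ConfigMgr feature): instead of fighting OSDStateStorePath, keep a small index of completed backups keyed by the old computer name, and have the restore side look the state store path up at run time. A minimal sketch of that bookkeeping, with a made-up share path:

```python
# Hedged sketch: a tiny "pending restores" index (old computer name ->
# state store path). The backup task sequence records its entry; the
# restore task sequence claims it later. The index shape and the idea of
# persisting it on a share are assumptions for illustration only.

def record_backup(index: dict, old_name: str, store_path: str) -> None:
    """Called after a backup finishes: remember where this PC's store landed."""
    index[old_name.upper()] = store_path

def claim_backup(index: dict, old_name: str):
    """Called by the restore: fetch and remove the entry so it isn't reused.

    Returns None if no backup was recorded for that name.
    """
    return index.pop(old_name.upper(), None)
```

In practice the dict would be loaded from and saved back to a file on an onsite share; popping the entry on claim mimics archiving the XML so the same backup isn't restored twice.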
  3. Hello, I've been dealing with a problem that I have considerable trouble searching for. My Windows 10 task sequence will just pause and hang for approximately 50 minutes during package installations. The downloads are clean; the installs start and then go dormant. The hang is usually about 50 minutes long, and I initially thought it was a specific package, but when I moved that package to a different position in the order, the issue moved to another install. So far I have only seen it once per task sequence, and only during packages: they download fine, start the install, and then stall. I've tweaked power settings, and I run the High Performance plan during the sequence. Sometimes the machine wakes up if I just hit a key on the keyboard or open and close the console window. Nothing really jumped out in the logs. I was wondering if someone had an idea where to look.

Editing for more detail; here goes. Currently our production image is the typical base image with the typical "must haves" to work in our environment. The front end for the task sequence presents a choice of configurations, which in turn adds the completed machine to a more customized collection. Licensed apps each have their own collection (one app per collection, etc.). After the task sequence, detection logic kicks in and installs the licensed apps and whatever else was assigned. Typically we put Lync 2013 in the task sequence so everything gets it, but that presented an issue for machines that received the full Office install: all sorts of hurt happens when we install Office on top of a machine with Lync. Under previous licensing agreements, we had Office in the task sequence and just stacked the order appropriately. The solution for a while was to have the Office install strip Lync and then have an open deployment come back and reinstall it.

This works, but not cleanly, and techs would end up sitting there hammering the Configuration Manager client actions to force patching and installs. It wasn't terribly difficult to build a script to intercept these choices and set a dynamic variable to trigger an install mid task sequence. There wasn't a lot of documentation out there, but it wasn't hard to figure out. The dynamic variables trigger application installs.

What I have found so far is that package installs downstream of this are VERY picky. The simplest packages (copy files) appear to run through without incident. When the sequence hangs, it seems to pick the same packages, and it does so once per TS. The behavior shows as "Waiting for job status notification..." for 50 minutes, as mentioned before. Interacting with it, hitting Enter or the space bar, seems to get the ball rolling.

Moving the dynamic install and Lync to the tail end of the task sequence has worked in a few tests. I will continue to pursue this, and I will also prepare for the contingency of dumping the dynamic variable and just making individual installs for each version, gated by a single task sequence variable. I was hoping to make something easily scaled, but having an anchor that forces things to the end feels like I am just pushing the problem off till later.
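One thing that helped me pin down where the stall sits: rather than eyeballing the logs, scan them for long quiet gaps between timestamped lines. A rough sketch of that idea; note the timestamp regex here is simplified (real smsts.log lines carry the time in a `time="..."` attribute, so you would adjust the pattern accordingly):

```python
# Flag any gap in a log where consecutive timestamped lines are further
# apart than a threshold (e.g. the ~50-minute "Waiting for job status
# notification..." stall). Simplified HH:MM:SS timestamps for illustration.
import re
from datetime import datetime, timedelta

def find_gaps(lines, threshold=timedelta(minutes=30)):
    """Return (line_before, line_after, gap) triples where the log went quiet."""
    ts_re = re.compile(r"(\d{2}:\d{2}:\d{2})")
    gaps, prev_time, prev_line = [], None, None
    for line in lines:
        m = ts_re.search(line)
        if not m:
            continue  # skip lines with no recognizable timestamp
        t = datetime.strptime(m.group(1), "%H:%M:%S")
        if prev_time is not None and t - prev_time >= threshold:
            gaps.append((prev_line, line, t - prev_time))
        prev_time, prev_line = t, line
    return gaps
```

Pointing something like this at the log makes it obvious exactly which step the sequence was sitting on when it went dormant, which is how I confirmed the hang follows the dynamic-install step rather than any one package.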