anyweb
Root Admin · 8,673 posts · 339 days won
Everything posted by anyweb

  1. the only way I've found is to script around it on the client and get that script onto your Autopilot images somehow, or get your proxy team to add a transparent proxy for it, or use Wi-Fi connections that bypass the proxy. There are probably other options, but for now that's what we have
  2. update: there's a hotfix released and I've blogged about how you can verify if you have the issue and explained what the hotfix does... https://www.niallbrady.com/2021/07/28/hotfix-available-for-2103-bitlocker-policy-storm/
  3. the script is still downloadable (scroll up this page for the link...), you just need to be a logged-on member to download it, so please try again
  4. you mentioned subnets, and that's not advisable, have a read of Jason's old blog post here to get some ideas https://home.memftw.com/ip-subnet-boundaries-are-evil/
  5. does this have anything to do with hosting a KMS server ? if not, please raise a new topic, thanks
  6. Introduction

This is part 8 in a series of guides about cloud attach in Microsoft Endpoint Manager, with the aim of getting you up and running with all things cloud attach. This part will focus on enabling tenant attach. This series is co-written by Niall & Paul, both of whom are Enterprise Mobility MVPs with broad experience in the area of modern management. Paul is a 5-time Enterprise Mobility MVP based in the UK and Niall is an 11-time Enterprise Mobility MVP based in Sweden.

In part 1 we configured Azure AD Connect to sync accounts from the on-premises infrastructure to the cloud. In part 2 we prepared Azure resources for the Cloud Management Gateway, and in part 3 we created the cloud management gateway and verified that everything was running smoothly. In part 4 we enabled co-management. With co-management, you retain your existing processes for using Configuration Manager to manage PCs in your organization, and you gain the additional advantage of being able to transfer workloads to the cloud via Endpoint Manager (Intune). In part 5 we enabled the compliance policies workload and reviewed how that affected a co-managed computer. In part 6 we configured conditional access and used it to deny access to company resources unless the device was encrypted with BitLocker. In part 7 we showed you how to co-manage Azure AD devices. In this part we'll enable tenant attach and take a brief look at its features.
Cloud attach - Endpoint Managers silver lining - part 1 Configuring Azure AD connect
Cloud attach - Endpoint Managers silver lining - part 2 Prepare for a Cloud Management Gateway
Cloud attach - Endpoint Managers silver lining - part 3 Creating a Cloud Management Gateway
Cloud attach - Endpoint Managers silver lining - part 4 Enabling co-management
Cloud attach - Endpoint Managers silver lining - part 5 Enabling compliance policies workload
Cloud attach - Endpoint Managers silver lining - part 6 Enabling conditional access
Cloud attach - Endpoint Managers silver lining - part 7 Co-managing Azure AD devices
Cloud attach - Endpoint Managers silver lining - part 8 Enabling tenant attach

Tenant attach first showed up in Technical Preview 2002.2 and was released in ConfigMgr 2002, which you can read about here. You can think of tenant attach as a way to give your Endpoint Manager admins access to ConfigMgr actions/data via the MEM console (log in to your tenant at https://aka.ms/memac) without needing to do it via the ConfigMgr console.

The prerequisites

The user account needs to be a synced user object in Azure AD (hybrid identity). This means that the user is synced to Azure Active Directory from Active Directory.
  • For Configuration Manager version 2103 and later: has been discovered with either Azure Active Directory user discovery or Active Directory user discovery.
  • For Configuration Manager version 2010 and earlier: has been discovered with both Azure Active Directory user discovery and Active Directory user discovery.
The user account also needs the Initiate Configuration Manager action permission under Remote tasks in the Microsoft Endpoint Manager admin center. For more information about adding or verifying permissions in the admin center, see Role-based access control (RBAC) with Microsoft Intune.

Note: In case it's not clear above, you need to configure Azure AD Connect to sync your on-premises users to the cloud for the user actions to succeed.
You also need to go through the Azure services node in ConfigMgr and configure cloud management to sync Azure Active Directory User Discovery.

Step 1. Create a collection

This is an optional step, but it helps you keep track of which devices are tenant attached. Create a collection called Tenant Attached; you will use that collection to populate your tenant attached devices. Once created, place one or more devices into the collection.

Step 2. Enable tenant attach

In the ConfigMgr console, select the Administration node and expand Cloud Services, then select Co-management (2103 or earlier) or, based on what we saw in the recent technical preview (Technical Preview 2106), select Cloud Attach (2107 or later). Select CoMgmtSettingsProd, right click and bring up the properties. In Co-management properties, click on the Configure upload tab. Next, place a check in the Upload to Microsoft Endpoint Manager admin center checkbox, and select a collection, for example the Tenant Attached collection we created in step 1.

Note: If you select All devices managed by Microsoft Endpoint Configuration Manager then all devices (including servers) will show up in the MEM console.

Next, deselect Enable Endpoint Analytics for devices uploaded to Microsoft Endpoint Manager, and finally click Apply. When prompted to authenticate to Azure services, enter the credentials of your Global Admin account for the applicable tenant. After correctly entering your credentials, the changes will be applied and you can review the success or failure of your actions via the CMGatewaySyncUploadWorker.log.

Step 3. Verify upload of data

After a device is added to the target collection, you can look at the CMGatewaySyncUploadWorker.log to verify that it uploads data for the number of records you added. So if, for example, you add one computer to the Tenant Attached collection, then it'll state "Batching 1 records" as shown below.
This will only happen when it detects a new device; in the next upload (15 minutes later) it'll return to "Batching 0 records" and so on, unless of course new devices are detected in the collection. This upload of data occurs every 15 minutes. In the below screenshots, all highlighted devices are tenant attached and are in the Tenant Attached collection.

Next, log in to your tenant at https://aka.ms/memac to display your devices. After the data is uploaded from ConfigMgr, check devices in Microsoft Endpoint Manager; depending on the type of device you'll see one or more records matching that device name.

In the first example, we have a device that is shown with two records: one is listed as Co-managed and the other as ConfigMgr. That second record is tenant attached. The Managed by column denotes how the device is managed, and tenant attached co-managed devices (hybrid Azure AD joined) may have a second record where it states managed by ConfigMgr. We saw this repeatedly with this specific client, even after clean installing Windows 10 on it; the client version in this particular case was CM2103. If it's an Azure AD joined device that is also co-managed (as we described in part 7) then the Managed by column will state Co-managed, yet this device will have only one record. Lastly, if the device is merely managed by ConfigMgr (not co-managed, not Azure AD joined) then it will show up with one record.

Step 4. Looking into tenant attach features

Now that we can identify the different types of devices that are tenant attached, let's take a look at the power of tenant attach. If we look at the Azure AD joined, co-managed device which we deployed in part 7, we can see that the following additional capabilities become available by enabling tenant attach and adding the computer to the collection so that the device becomes tenant attached.
The following are available (in preview):
  • Resource explorer
  • Client details
  • Timeline
  • Collections
  • Applications
  • CMPivot
  • Scripts
In addition, you can now trigger the following actions:
  • Sync machine policy
  • Sync user policy
  • App evaluation cycle
In the MEM console, the tenant attach abilities are highlighted below in red. Below you can see the Timeline feature and some of the data it can provide. To grab more data, click the Sync button and then refresh the screen. And here's a quick look at CMPivot. Resource explorer is chock full of data.

Conclusion

Using tenant attach gives your admins more power to do ConfigMgr actions via the MEM console without needing to even install the ConfigMgr console.
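The Step 3 verification above boils down to spotting "Batching N records" lines in CMGatewaySyncUploadWorker.log. A tiny illustrative Python sketch of that pattern match (the surrounding log-line format here is made up; the post itself just reads the log with CMTrace):

```python
import re

def batched_record_counts(log_lines):
    """Pull the N from each 'Batching N records' line in the log."""
    pattern = re.compile(r"Batching (\d+) records")
    return [int(m.group(1)) for m in map(pattern.search, log_lines) if m]

sample = [
    "Batching 1 records  CMGatewaySyncUploadWorker  15/07/2021 10:00:00",
    "Completed upload",
    "Batching 0 records  CMGatewaySyncUploadWorker  15/07/2021 10:15:00",
]
print(batched_record_counts(sample))  # [1, 0]
```

Any non-zero count confirms the device you added to the Tenant Attached collection was picked up in that 15-minute cycle.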
  7. 1. that's normal, see https://docs.microsoft.com/en-us/mem/configmgr/core/clients/deploy/assign-clients-to-a-site 2. I wouldn't set up boundaries using subnets, that's probably going to cause you issues; use IP ranges instead for your boundary definitions. Clients should download content from the distribution point closest to them (based on boundaries) or from fallback DPs if you have configured that.
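If you already have subnet definitions, each one maps mechanically to the start and end addresses of an IP range boundary. A small illustrative Python sketch (a hypothetical helper, not ConfigMgr tooling):

```python
import ipaddress

def subnet_to_range(cidr):
    """Translate a subnet definition into the start/end pair an IP range boundary needs."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address), str(net.broadcast_address)

print(subnet_to_range("192.168.10.0/24"))  # ('192.168.10.0', '192.168.10.255')
```

Running each of your subnets through something like this gives you the range values to paste into an IP address range boundary.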
  8. what have you tried and did it fail ? why not raise your own topic and explain what you've tried and what you want to achieve...
  9. if you add a restart computer step before this one, does it make any difference ? did it ever work for you ? during OSD is the computer in a collection targeted with these updates ?
  10. Introduction

In a previous blog post I explained how to sign up for a webinar series and, by doing so, learn from industry experts and Microsoft MVPs about how and why they use tools like Tachyon from 1E to make things work better for your users, including how to deal with slow endpoints, how to deal with apps that crash or, for today's blog post, how to deal with those annoying admin requests. I will blog about each episode in the webinar series and link them here for your perusal.

Episode 1. How to find and fix Slow Endpoints
Episode 2. That crashy app
Episode 3. Dealing with annoying admin requests

Security. Love it or hate it, without it we'd all be in a worse situation. Security best practice mandates that the user logged on to Windows should be a standard user and not a local administrator. Why? Because that helps thwart the spread of damage to the operating system from running files that could overwrite operating system kernel files, for example, or simply keeps rootkits or viruses in check. Bad software can do bad things, especially if you are a local administrator. That said, most users will need to be able to install legitimate software or configure things that require local administrator permissions on their computer, so how can we deal with that in a seamless, automated way with Tachyon?

"Maybe you need an application for a demo, in 30 minutes."

Software installation requests are probably the most common reason why people request admin elevation. Here are some ways that people typically deal with local admin rights requests.

  • Group Policy – a bit of a legacy dinosaur, and not that granular. The downside is knowing whether it applied to the right group, and whether you cleaned it up afterwards.
  • Local Administrator Password Solution (LAPS) – giving out a password for the local admin account comes with risks: the user is able to add others to the local admin group, and the security team is not so happy about that.

How does Tachyon deal with this problem?
Tachyon deals with this seamlessly and fast, but it doesn't sacrifice security to enable this ability. It's part of the Guaranteed State module, specifically the Real-time Security Broker (RSB) shown below. This is made up of the three rules listed here.

  • RSB: Disable Inactive RDP
  • RSB: Remove 'Own Machine' Local Admin escalations once timeout is exceeded
  • RSB: Remove unauthorised local admins

Here's an example of how Tachyon deals with this from start to finish. This is broken down into a couple of actions which are security focused, in that the user must be whitelisted in order to be allowed local administrative privileges.

  • Whitelisting an account / verifying the whitelisted account
  • Adding that whitelisted account to the local admins group

Whitelisting an account

Below we can see a user (aneel) is logged on to PC0004 and we can clearly see that the user is not a member of the Local Admins group on that PC. In the Tachyon Explorer console, you can search for RSB and then select RSB Whitelist: <Action> user <UserName> to add (or remove) a user. Next, click on Edit (shown below with the red arrow) to add parameters to your action. In the parameters section on the right side of the console, select the device name that you want that user to have local admin permissions on.

Adding a user to local admin on PC0004 in Tachyon

After clicking Perform this action the request is validated, and any alternative accounts needed to approve the request will be asked to approve it. After the instruction is approved you can see that the user has been whitelisted, and all of this is in real time.

Verifying the whitelisted account

To verify that the whitelist request has succeeded you can use the List Real-time Security Broker Whitelist action in Tachyon Explorer, and in an instant you can see that the user has been added to the whitelist.

Adding the user to the Local Admins group

Next, you actually add the user (aneel) to the Local Admins group.
In Tachyon Explorer use the RSB Command: Add user <UserName> to the Local Administrators group, ONLY ON HOST: <hostname>. After performing the action you can see that the user is added to the Local Admins group. The entire process took less than a minute to whitelist and then add the user to the local admins group, including the secondary approvals. You can also set the amount of time granted, for example give the user 30 minutes of local admin time.

What about self-service for the end user?

If you want to allow your users to do this on their own, to elevate on demand using self-service, it's possible as long as they've been given the correct permissions/ability. We can deploy an app called "Escalate to local admin" via Tachyon to a small subset of users whom we trust to use it appropriately. Below we can see another user (Ataylor) is logged on to PC0005 and this user is not a member of the Local Admins group. This user launches the "Escalate to local admin" app so that they can self-service (with 2FA) the action themselves, and after clicking Go and satisfying the security prompts, the user is now added to the local admins group.

Users behaving badly

What about users adding other accounts without permission? Below we can see that a user who was granted local admin permissions has decided to add another user (sneakyadmin) to the local admins group. But no sooner do they click Apply than they are informed that the unauthorized action was denied. This is because the added user was not authorized via the Tachyon platform and was instantly denied; not only that, but the action has been logged and undone. Going back into the Local Admins group you can see that the sneakyadmin account is not listed any more.

Reporting on actions

If you look in the Guaranteed State rules which drive this, you can see that the action has been remediated; this is revealed under Report, Remediations.
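Conceptually, the "Remove unauthorised local admins" rule is a reconciliation loop: compare the group's current members against the whitelist and undo (and log) anything that shouldn't be there. A minimal Python illustration of that idea — this is not Tachyon's implementation, just a sketch of the logic, and the names are mine:

```python
def reconcile_local_admins(current_members, whitelist, baseline=("Administrator",)):
    """Split group members into those allowed to stay and those to remove and log."""
    allowed = [m for m in current_members if m in whitelist or m in baseline]
    removed = [m for m in current_members if m not in allowed]
    return allowed, removed

# sneakyadmin was never whitelisted, so the rule would undo the change
allowed, removed = reconcile_local_admins(
    ["Administrator", "aneel", "sneakyadmin"], whitelist={"aneel"})
print(removed)  # ['sneakyadmin']
```

The real product does this in real time on the endpoint, which is why sneakyadmin disappears from the group almost as soon as it is added.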
Conclusion

Using Tachyon to provide admin credentials using security focused methods is easy and painless, yet retains useful features such as auditing, whitelisting and the ability to deny unapproved users. That's it for this blog post, I hope to see you in the next one. In the meantime, I'd suggest that you sign up for the next DEM webinar, it's free, tell them Niall sent you. And for those of you who want to see previously published episodes on YouTube, please click here.

DISCLAIMER: The contents of this article are the opinion of the author and have been written from an impartial standpoint; however, 1E may have reimbursed the author for time and expenses for undertaking the findings and conclusions detailed in the article.
  11. this sounds like the same problem (random) reported by various customers with ConfigMgr 2010, I didn't see any fix for that, so you could try raising a ticket with Microsoft and see what they say; they claimed at the time that they couldn't reproduce it
  12. thanks a bunch for the info, I'll include a note linking to this at the top of the blog post! I've also tweeted the info and posted it on my LinkedIn page
  13. do you have lots of lines in red in your SMS_Cloud_ProxyConnector.log ? did you review the link I included above to verify that you have not missed a step? have you ever had this working ? you don't need to configure any ports in Azure.
  14. Text=ERROR_WINHTTP_NAME_NOT_RESOLVED — DNS or network issues? have you reviewed the required ports and other configuration that we've blogged about here ?
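ERROR_WINHTTP_NAME_NOT_RESOLVED is a name-resolution failure, so a first check is whether the client can resolve the CMG hostname at all. A quick illustrative sketch in Python (on a real client you'd test the actual CMG FQDN, e.g. with nslookup; "localhost" below is just a stand-in):

```python
import socket

def can_resolve(hostname):
    """True if the name resolves to at least one address (DNS or hosts file)."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))  # True
```

If the CMG name fails to resolve, look at the client's DNS configuration or proxy before anything else.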
  15. Introduction

By now we should all be familiar with Windows Autopilot and how it is used to provision new computers, as explained below in Microsoft's diagram. For every new computer delivered via the Windows Autopilot process there's usually an old or obsolete computer waiting to be retired or re-sold. Those old computers still have life left in them and are frequently sold back to the vendor who sold them as new 3 years previously, either to be re-used or re-sold around the world. However, those old devices may still contain sensitive company data and you want to protect that from prying eyes. Today your company may have an existing process where on-site support staff clear the BitLocker protectors from the TPM chip to make extraction of that data as difficult as possible. The Retire My PC app aims to give the end user a self-service ability to retire their old PC quickly, easily and with minimum fuss and of course to do so in a secure manner, thereby protecting your company's data. In this blog post I'll guide you through setting it up in your own environment.

The Retire My PC app

This app has the following features:
  • stops the ConfigMgr client agent service
  • stops the MBAM agent service
  • rotates the BitLocker key (optional)
  • WIPEs the BCD registry entries (optional)
  • joins a workgroup
  • clears the TPM protectors
  • adds a record of all this to Azure Tables
  • emails the log to a support inbox

Requirements: Before you get started please ensure that you've already set up a SendGrid account (for sending emails) as I've explained in Step 4 of this blog post.

In this blog post you'll do the following actions:
  • Create an Azure resource group
  • Create a storage account
  • Copy the access key connection string
  • Create an Azure table
  • Create a function app
  • Configure the function app settings
  • Create some HTTP triggers
  • Deploy the app via ConfigMgr
  • Test and verify on a computer

Step 1.
Create a Resource Group

Log in to https://portal.azure.com and click on Create a resource; in the search field type in Resource group and select Create Resource Group. Give it a suitable name like RetireMyPc and select a suitable region.

Step 2. Create a storage account

In the newly created resource group, click on the Create button, select Marketplace and search for Storage account using the text field provided. When you find Storage account, select it and then click Create. In the Create a storage account wizard, give it a unique name, select the resource group you previously created and finally select your applicable region as shown below (highlighted in yellow). When done, click on Review + create followed by Create.

Step 3. Copy the access key connection string

After creating the storage account, select it, click on Access keys in the Security + networking section, then click on Show keys and copy the Connection string of key1 as we'll need it later.

Step 4. Create a table

In the storage account you created in Step 2, click on Tables under the Data storage heading in the left pane. Next, click on + Table to add a new table. Give the table a name like devicestatus and then click OK to add the table. Next click on Storage Explorer (preview) and select the newly created devicestatus table. If it appears blank (it appeared blank for me with Google Chrome) try using Mozilla Firefox or another web browser. As you can see there is no data in the table yet.

Step 5. Create a function app

In the RetireMyPc resource group, click on + Create to create a new resource, search for Function App and go ahead and create a function app. In the Create Function App wizard, make sure to select the RetireMyPc resource group in the resource group drop down menu, then fill in the Function App name, select PowerShell Core as the Runtime stack and finally select an applicable region.
and then click on Review + create before clicking on Create to complete this wizard. After the Function App deployment is complete your resource group should look something like this.

Step 6. Configure the function app

After creating the RetireMyPc Function App, select Configuration from the Settings menu on the left. Click on + New application setting. Give the new setting a suitable name like RetireMyPc_setting and paste in the connection string you copied in Step 3. Click OK in the Add/Edit application setting wizard; it should now appear in the list of application settings. Finally, click on Save and then click on Continue when prompted.

Step 7. Create some HTTP triggers

In this step you will create 2 HTTP triggers for use by the RetireMyPc app. Select the RetireMyPc Function App you created above, select Functions in the Functions menu, then click on + Add. Create a new HTTP trigger from the template provided and click on Add when done. After the HTTP trigger is created, click on Code + Test and paste in the following PowerShell code to replace what was there previously.

#######################################################################################################################################
# use this code in a http trigger as part of a function app
# for more details see https://www.windows-noob.com/forums/topic/22489-retire-my-pc-a-self-service-app-to-secure-company-data-on-old-computers/
# Niall Brady, 2021/06/27
#######################################################################################################################################
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata, $inputTable)

$Tenant = "windowsnoob.com"
$triggerName = "ADD data TO Azure Table"

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$ComputerName = $Request.Query.ComputerName
if (-not $ComputerName) { $ComputerName = $Request.Body.ComputerName }
$UserName = $Request.Query.UserName
if (-not $UserName) { $UserName = $Request.Body.UserName }
$Model = $Request.Query.Model
if (-not $Model) { $Model = $Request.Body.Model }
$Manufacturer = $Request.Query.Manufacturer
if (-not $Manufacturer) { $Manufacturer = $Request.Body.Manufacturer }
$Serial = $Request.Query.Serial
if (-not $Serial) { $Serial = $Request.Body.Serial }
$DateRetired = $Request.Query.DateRetired
if (-not $DateRetired) { $DateRetired = $Request.Body.DateRetired }
$Status = $Request.Query.Status
if (-not $Status) { $Status = $Request.Body.Status }

$a = Get-Date
$body = $body + "$a ------------------------------------`n"
$a = Get-Date
$body = $body + "$a Starting the following trigger: '$triggerName'.`n"
$a = Get-Date
$body = $body + "$a Connected to tenant: '$Tenant'.`n"

if ($ComputerName) {
    $a = Get-Date
    $body = $body + "$a Adding this computer to Azure Tables: '$ComputerName'.`n"

    # fix the date
    $NewDate = $(get-date($DateRetired) -UFormat '+%Y-%m-%dT%H:%M:%S.000Z')
    $a = Get-Date
    $body = $body + "$a Get next row key based on the last entry in the Storage Table....`n"
    $nextRowKey = $([int]$(($inputTable.RowKey | measure -Maximum).Maximum) + 1)
    $a = Get-Date
    $body = $body + "$a nextRowKey = '$nextRowKey'.`n" # this will be the row key that we insert in this operation

    # Input row into DB
    $tableStorageItems = @()

    # insert the NEW data
    $tableStorageItems += [PSObject]@{
        PartitionKey = "1"
        RowKey       = $nextRowKey.ToString()
        ComputerName = $ComputerName
        UserName     = $UserName
        Model        = $Model
        Manufacturer = $Manufacturer
        Serial       = $Serial
        DateRetired  = $NewDate
        Status       = $Status
    }

    # insert the data
    $Result = Push-OutputBinding -Name outputTable -Value $tableStorageItems
    $body = $body + " Adding the data returned (usually blank...): $Result `n"
}

$a = get-date
$body = $body + "$a Exiting Azure function.`n"
$a = Get-Date
$body = $body + "$a ------------------------------------`n"

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $body
})

Note: In the code above, replace $Tenant = "windowsnoob.com" with your tenant name and then click on Save.

Step 8. Integrate the HTTP trigger

Select the HTTP trigger and click on Integration, then click on + Add input under Inputs. In the window that appears on the right of your console, for Binding Type select Azure Table Storage. For the Table name enter the name of the table created earlier, which was devicestatus. For the Storage account connection select RetireMyPc_setting from the drop down menu as shown below, then click on OK. Repeat the above process for the HTTP trigger Output. Once done, your HttpTrigger1 integration should look like this, with both Azure Table Storage (inputTable) and Azure Table Storage (outputTable) configured.

Step 9. Test the HTTP trigger

In the Code + Test section, click on Test/Run and paste in the following input.

{
  "ComputerName": "MYCOMPUTER",
  "UserName": "niall",
  "Model": "Surface Book 2",
  "Manufacturer": "Microsoft",
  "Serial": "1234567890",
  "DateRetired": "2021-06-27T14:20:06.000Z",
  "Status": "OK"
}

If you did everything correctly you should see the following type of output in the right pane. Notice how it states your tenant name and "Adding this computer to Azure Tables:". Finally, check the Azure Tables in your storage account and you should see the data you just added. Success!

Step 10.
Add/Configure the second HTTP trigger

Now that you know how this works, please add an additional HTTP trigger called HttpTrigger2 with the following code:

#######################################################################################################################################
# use this code in a http trigger as part of a function app
# for more details see https://www.windows-noob.com/forums/topic/22489-retire-my-pc-a-self-service-app-to-secure-company-data-on-old-computers/
# Niall Brady, 2021/06/27
#######################################################################################################################################
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata, $inputTable)

$Tenant = "windowsnoob.com"
$triggerName = "READ data FROM Azure Table"

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$nextRowKey = $Request.Query.nextRowKey
if (-not $nextRowKey) { $nextRowKey = $Request.Body.nextRowKey }
$CheckComputerName = $Request.Query.CheckComputerName
if (-not $CheckComputerName) { $CheckComputerName = $Request.Body.CheckComputerName }

$a = Get-Date
$body = $body + "$a ------------------------------------`n"
$a = Get-Date
$body = $body + "$a Starting the following trigger: '$triggerName'.`n"
$a = Get-Date
$body = $body + "$a Connected to tenant: '$Tenant'.`n"

if ($nextRowKey -and $CheckComputerName) {
    $a = Get-Date
    $body = $body + "$a Checking the following row: '$nextRowKey'.`n"
    $body = $body + "$a Looking for this computername: '$CheckComputerName'.`n"

    # Put all table rows into $table
    $table = ""
    foreach ($row in $inputTable) {
        $table += "$($row.PartitionKey) - $($row.RowKey) - $($row.ComputerName) - $($row.UserName) - $($row.Model) - $($row.Manufacturer) - $($row.Serial) - $($row.DateRetired) - $($row.Status) "
    }

    # print out the results...
    # $body = $body + $table

    # validate section
    # $body = $body + "Validate: $($($inputTable|Where-Object -Property RowKey -EQ 12).ComputerName)"
    $a = Get-Date
    $found = $($($inputTable | Where-Object -Property RowKey -EQ $nextRowKey).ComputerName)
    $body = $body + "$a ComputerName found: $found`n"
    if ($found -match $CheckComputerName) {
        $a = Get-Date
        $body = $body + "$a FOUND a MATCH :-)`n"
    } else {
        $a = Get-Date
        $body = $body + "$a sorry, did not find a match :-(`n"
    }
}

$a = get-date
$body = $body + "$a Exiting Azure function.`n"
$a = Get-Date
$body = $body + "$a ------------------------------------`n"

# show the output to the browser... Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $body
})

and configure the integration the same way as you did with the first trigger. The second trigger can be tested with a different input, pasted here.

{
  "nextRowKey": "1",
  "CheckComputerName": "MYCOMPUTER"
}

Confirm that it finds the previously added data before moving on; notice how it states "FOUND a MATCH", which proves it is working.

Step 11. Download the script

Note: The SecureWipe.ps1 script used in this guide can only be downloaded by logged-on members of windows-noob.com, so if you haven't already done so, create an account and log in. Download the RetireMyPC PowerShell script (txt file) and save it as SecureWipe.ps1: SecureWipe.txt

Step 12. Edit the script

The script won't work until you've made some edits.
  • In Azure, locate HttpTrigger1, click on Get Function URL and copy it.
  • Edit line 428 of SecureWipe.ps1 and insert the copied URL of HttpTrigger1.
  • Edit line 463 and insert the copied URL of HttpTrigger2.
  • Edit line 595 and insert your SendGrid API key.
  • Modify the following lines, otherwise I'll be getting your emails...

Step 13.
Deploy the app via ConfigMgr

Locate ServiceUI.exe from the C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x64 folder in the MDT files as explained in Step 2 here. For licensing reasons I cannot include it here. Create a package/program with the following command line:

ServiceUI.exe -process:explorer.exe %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -WindowStyle Hidden -ExecutionPolicy Bypass -File securewipe.ps1

Your finished package (or app, it's up to you to decide which...) should contain at least securewipe.ps1 and ServiceUI.exe (the install.cmd and uninstall.cmd are not needed here). Deploy the package with a purpose of Available so that it shows up in your users' Software Center. Job done, go ahead and start testing it; to see the app in action please review my video here.

Troubleshooting

If you want to test this in a non-destructive way, locate the $BrickTheDevice variable and set it to $false. The app logs to C:\Users\<USERNAME>\AppData\Local\Temp\win.ap.securewipe.log
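For testing outside Software Center, the body HttpTrigger1 expects (see Step 9) is just a flat JSON document. A small illustrative Python sketch that builds it — the FUNCTION_URL placeholder and the build_payload helper are mine, not part of SecureWipe.ps1; the field names come from the trigger code above:

```python
import json
from datetime import datetime, timezone

# Placeholder: paste the value from "Get Function URL" for HttpTrigger1 here.
FUNCTION_URL = "https://<your-function-app>.azurewebsites.net/api/HttpTrigger1?code=<key>"

def build_payload(computer, user, model, manufacturer, serial, status="OK"):
    """Build the flat JSON body HttpTrigger1 reads from $Request.Body."""
    return json.dumps({
        "ComputerName": computer,
        "UserName": user,
        "Model": model,
        "Manufacturer": manufacturer,
        "Serial": serial,
        # same shape the trigger's -UFormat '+%Y-%m-%dT%H:%M:%S.000Z' produces
        "DateRetired": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z"),
        "Status": status,
    })

payload = build_payload("MYCOMPUTER", "niall", "Surface Book 2", "Microsoft", "1234567890")
print(json.loads(payload)["ComputerName"])  # MYCOMPUTER
# To actually send it:
# urllib.request.urlopen(urllib.request.Request(
#     FUNCTION_URL, payload.encode(), {"Content-Type": "application/json"}))
```

Posting that payload should produce a new row in the devicestatus table, which you can then confirm with HttpTrigger2.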
  16. I'd just run it and then attempt to install the portal again
  17. I think you answered your own question, have you tried Hyper-V ? why have 2 NICs connected to the ConfigMgr server ?
  18. I notice in your first screenshot that it is warning you about the sheer number of SUGs you have, do you have any idea how many there are ? any reason why you are not cleaning them up ?
  19. PKI is not needed for BitLocker Management, but it's recommended. You can still use e-HTTP; however, be aware that come October 2022, HTTP-only client communication will be deprecated, so the move to HTTPS should start now: https://www.niallbrady.com/2021/03/12/prepare-for-http-only-client-communication-depreciation-in-configmgr-31-10-2022/ I'd recommend you fix your PKI issues and continue down that road; hire a PKI consultant to assist
  20. ok, and are all your component statuses OK ? or are there issues reported there ?
  21. first things first, converting ConfigMgr to HTTPS shouldn't break things unless it's not done right, so were you sure that the clients had the right certs in place before making the switch?
  22. what happens if you click OK or type the name of the software update group ?
  23. are you PXE booting a VM or a real computer ? what does the smspxe.log reveal when you are PXE booting ? if you don't see the MAC address of the device listed then something is mis-configured on the OSD side, for example, did you deploy anything to the unknown computers collection(s), did you make the task sequence available to media & PXE ?