Everything posted by anyweb

  1. what type of account are you using to create this ? I just tried it now and I do see the option to create a new storage as you see here
  2. Introduction

     I've seen multiple posts on Twitter recently where people showed how to retrieve data from a company device. The Retire My PC app (shown below) adds a stronger layer of protection to your corporate devices by deleting the BitLocker recovery information from the TPM before shutting down the computer, and it gives your users a self-service way of securing company data on old computers before handing them back. In case you haven't seen that blog post already, please familiarize yourself with the Retire My PC solution here. In this blog post I'll show you how you can verify that the user has retired their old device by running a script on their new device.

     Step 1. Add a new httptrigger

     In the Resource Group that you created in Step 1 of the first blog post, create a new httptrigger and paste in the following code.

```powershell
#######################################################################################################################################
# use this code in a http trigger as part of a function app
# for more details see ...
# Niall Brady, 2021/10/05
#######################################################################################################################################
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata, $inputTable)

$Tenant = "windowsnoob.com"
$triggerName = "Check if user retired PC previously"

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$CheckUser = $Request.Query.CheckUser
if (-not $CheckUser) {
    $CheckUser = $Request.Body.CheckUser
}

$a = Get-Date
$body = $body + "$a ------------------------------------`n"
$a = Get-Date
$body = $body + "$a Starting the following trigger: '$triggerName'.`n"
$a = Get-Date
$body = $body + "$a Connected to tenant: '$Tenant'.`n"

if ($CheckUser) {
    $a = Get-Date
    $body = $body + "$a Looking for the following user name: '$CheckUser'.`n"

    # put all matching table rows into $table
    $table = [System.Collections.ArrayList]::new()
    foreach ($row in $inputTable) {
        # look for any rows where the user we are checking equals the UserName column
        # and the record is newer than 14 days...
        if ($CheckUser -eq $row.UserName -and $($row.DateRetired) -gt $(Get-Date).AddDays(-14)) {
            $table.Add(@{
                sUserName     = $row.UserName
                dateRetired   = $row.DateRetired
                sStatus       = $row.Status
                sComputerName = $row.ComputerName
            })
            $found += "$($table[$table.Count-1].sComputerName) - $($table[$table.Count-1].dateRetired)"
            $body = $body + "$a FOUND record: '" + $found + "'.`n"
        }
    }
}

$a = Get-Date
$body = $body + "$a Exiting Azure function.`n"
$a = Get-Date
$body = $body + "$a ------------------------------------`n"

# show the output to the browser...
# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $body
})
```

     Step 2. Integrate the trigger with Azure tables

     In the newly created trigger, click Save and then click on Integration. You will be adding integration with the Azure tables created in the first blog post for both the input and output of the new trigger (marked in yellow below). For Inputs, click on + Add input, then...
     configure the following settings:

     For Binding Type, select: Azure Table Storage
     For Storage Account Connection, select: RetireMyPC_setting
     For Table name, enter: devicestatus

     as shown below in yellow. Click OK when done. You should see it looking like this. Next, for the Output table, click on + Add output and fill it in like so:

     For Binding Type, select: Azure Table Storage
     For Storage Account Connection, select: RetireMyPC_setting
     For Table name, enter: devicestatus

     like this. Click OK when done; your httptrigger is now integrated with Azure tables.

     Step 3. Get the scripts

     Download the following files and extract them somewhere useful. Note: you will need to log in to windows-noob.com to download the scripts. Encode files.zip

     Step 4. Get ServiceUI.exe from MDT

     You'll need the ServiceUI.exe executable to display user interfaces (UI) to end users when operating in SYSTEM context. To get the file, download and install MDT somewhere and navigate to C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x64. To download MDT click here. Copy the ServiceUI.exe file to your extracted Encode Files folder.

     Step 5. Get the Win32 content prep tool

     Download the Win32 content prep tool from here. Copy the IntuneWinAppUtil.exe file to the root of your extracted scripts folder.

     Step 6. Modify the scripts

     Modify the following script: win.ap.retiremypc_verification.ps1. Get the function URL of the new httptrigger you created in Step 1 by looking at this graphic, and paste that function URL into line 72. Next, edit line 145 to your preferences. OK, now move on to the CreateScheduledTask_win.ap.retiremypc.verification.ps1 script and edit line 41. Next, fill in the values of the encoded files into lines 53-56; for example, here is before and after...

     If you don't know how to encode the files, look at the encode.ps1 script and you'll figure it out. Long story short: modify the path, then run the script; it'll generate 4 txt files, and you need to use the contents of those txt files in these variables. Note that you need to encode these files every time you make a change to their contents, and then paste the new text for each file into the main script.

     Step 7. Create the intunewin package

     Browse to the folder containing your files and run IntuneWinAppUtil.exe.

     Step 8. Deploy a Win32 app

     Next you'll deploy the new package to your selected users. In Microsoft Endpoint Manager, click on the Apps icon, select All apps, select + Add, and then select Windows app (Win32). Point it to the win32app_target folder and select the previously created .intunewin file, then configure the app like so. On the App information screen, enter the name etc. On the Program screen, enter the following install command: install_CreateScheduledTask_RetireMyPC_verification.cmd and set the install behavior to System. Fill in the requirements and the detection rules... Lastly, deploy it to your Windows Autopilot users Azure AD group and the rest will take care of itself.

     Step 9. Verify the end result

     On a newly deployed Windows Autopilot machine, log in and check the Scheduled Tasks folder; in there you should see that your targeted user has a new scheduled task. This task is scheduled to run daily for a month, starting 14 days after Windows Autopilot completes enrollment. You can wait 14 days, or just run it by right-clicking and choosing Run.
     If the task detects that the user retired a PC in the last x days, the task will run and self-delete, and the user will not see any message. However, if the logged-on user has no record of a computer in Azure tables in the last 14 days, then the following message will appear. This popup message will appear daily for the next 2 weeks (you can configure that via the scheduled task script), and can be 'fixed' by the end user either retiring their old PC, OR by a help desk manually entering the details. That's it for this blog post, until the next one, adios !
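     To make the trigger's behaviour concrete: the scheduled task can tell whether a retire record exists by looking at the body text the httptrigger returns, since the trigger writes a "FOUND record" line when a match newer than 14 days exists in the devicestatus table. The helper function and URL below are my own illustration (not the shipped win.ap.retiremypc_verification.ps1 script), and the function URL is a placeholder you'd replace with your own from Step 1:

```powershell
# Hypothetical helper: decide whether a user retired a PC recently,
# based on the body text returned by the httptrigger in Step 1.
function Test-UserRetiredPC {
    param([string]$ResponseBody)
    # the trigger appends "FOUND record: ..." to the body when a row in the
    # devicestatus table matches the user and is newer than 14 days
    return [bool]($ResponseBody -match 'FOUND record')
}

# Calling the function app (placeholder URL, use your own function URL from Step 1):
# $body = Invoke-RestMethod -Uri "https://<yourfunctionapp>.azurewebsites.net/api/httptrigger1?code=<key>&CheckUser=niall"
# if (Test-UserRetiredPC $body) { "user retired a PC recently" } else { "no recent record found" }
```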
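     If you want to sanity-check the base64 strings you paste into lines 53-56, the encoding step from Step 6 can be sketched in a few lines. This is a minimal round-trip example of the base64 approach, assuming encode.ps1 works this way; the file path and contents here are placeholders, not the real scripts:

```powershell
# Sketch: base64-encode a script file so its contents can be embedded in a
# variable, then decode it again to verify the round trip.
$source = Join-Path ([System.IO.Path]::GetTempPath()) "demo.ps1"   # placeholder path
Set-Content -Path $source -Value 'Write-Host "hello from RetireMyPC"'

# encode: this string is what you would paste into the script variable
$bytes   = [System.IO.File]::ReadAllBytes($source)
$encoded = [System.Convert]::ToBase64String($bytes)

# decode: the embedded string expands back to the original file contents
$decoded = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encoded))
```

Remember that the encoded text must be regenerated every time the underlying file changes, as noted in Step 6.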
  3. yes, see my blog post here > How can I dynamically install language packs and features on demand in an offline environment for Windows 10
  4. did you restart the server ? did you review the docs here ? https://docs.microsoft.com/en-us/mem/configmgr/core/servers/deploy/configure/about-the-service-connection-point
  5. take a look at my two posts here, they cover everything you need to convert to HTTPS, and they'll cover a bit more than Justin's excellent video, so please verify you didn't miss anything: How can I configure System Center Configuration Manager in HTTPS mode (PKI) - Part 1 How can I configure System Center Configuration Manager in HTTPS mode (PKI) - Part 2 also, keep in mind that certs can expire, and when they do you'll have issues, like this https://www.niallbrady.com/2020/08/16/how-can-i-replace-an-expired-iis-certificate-in-a-pki-enabled-configmgr-environment/ if you want to really test that PKI is working then try PXE boot (operating system deployment), if it fails you'll see it failing quickly in the logs, and that'll be a clue that you've missed something, also, on PKI-managed clients, your ConfigMgr client agent should report that the client is PKI, like this...
  6. ok, then when you see that screen press f8 to bring up the command prompt before it reboots, then locate the x:\windows\temp\smstslog\smsts.log file and attach it here
  7. let's just focus on one problem at a time, your e-http setup, did you configure it like i said ? and are your roles all configured in http only or ?
  8. it's a bit unclear from your post but what is your actual goal here, are you trying to enable ConfigMgr in HTTPS mode (PKI) or are you trying to use e-http (enhanced http), or do you simply have client issues with invalid sms certs ?
  9. hi there, all the scripts are freely downloadable as long as you are a logged on member of windows-noob.com, which you now are, so please try again @Champ
  10. and what does it report when you evaluate the compliance of that configuration ?
  11. good info, can you show me your Configurations tab in the configmgr client agent...
  12. ok if the mbam client is not getting installed then there's something wrong with your policy settings, are you sure you've configured Client Management and set it to Enabled ?
  13. i mean, do the client versions correspond to the site version, if yes then let's figure out if the client is getting the policy or not, did you check on the configmgr client agent to see if the bitlocker policy you configured is listed ?
  14. Introduction

      If you haven’t already noticed, I’m currently blogging about a series of DEM in 20 webinars from 1E, and I’ve linked each one that I’ve covered below for your perusal. In today’s blog post I’ll focus on how to deal with that Change Management Success Rate Struggle. That’s a mouthful, but in a nutshell it means: how can you cope with the onslaught of issues raised both pre- and post-change for a change management request? Every company has to deal with change management, possibly even more so now with so many people still working from home. Not only will you learn how to deal with the change management success rate, but you’ll also get real-time data before and after the change.

      Episode 1. How to find and fix Slow Endpoints
      Episode 2. That crashy app
      Episode 3. Dealing with annoying admin requests
      Episode 4. That Change Management Success Rate Struggle

      Why is change control important ?

      HDI (Help Desk Institute) referenced that 80% of incidents are caused by internal change. That’s a huge percentage.

      “80% of incidents are caused by internal change”

      If we could just control that better, and get an idea of what the output would be like before we roll it out into production, then we’d have fewer incidents and more time to do the job we were hired to do.

      Change Control Requests

      Change control usually starts with a change control request form for the desired change; in this example it’s for a global Zoom upgrade. Zoom is telecommunication software for holding meetings, and it became hugely popular during the ongoing Covid pandemic due to so many workers having to work from home. As new features are added, or security patches released, new versions need to be pushed out, and that all starts with a change control request.
      In Rob’s line of business (Rob Key, Senior Solutions Engineer at 1E), and among the customers he talks to, it’s common to see the following methods used for change control: either sending the change to IT so they can test it on one or more machines, and then after that test sending out a survey to the users involved asking how the change affected their machine (though depending on the change, IT might not dig in as deep as we’d like), or using a UAT (user acceptance testing) group to look at it.

      Capturing pre-change data

      Let’s take a different approach using Tachyon Experience. Not only can we do monitoring, but we can also check health and compliance policies on a group of test machines to make sure that those machines stay healthy both before and after the change is completed. For that we’d want to capture pre-change health and compliance information. In this particular example there are two control groups, manufacturing and marketing. These are two different parts of the organization and they have different needs, so they should be good target groups for the data that we need. In the screenshot below we can dig down and see that services are healthy and all of the numbers are looking good. Next we can verify the version (in real time) of the target software we intend to change, and below we can see it’s not yet upgraded. We can also see the services running, or, in the example below, that a Zoom Sharing Service is both stopped and disabled. It was disabled because a policy was created to not allow that service to run in the manufacturing group, for security reasons, to stop the release of important and confidential information. For the marketing group another policy was created to allow it to run.

      Post-change rules to guarantee state

      Any area of a business that goes down due to change management processes that go wrong costs that business money, so to avoid that, policies are created in Tachyon in Guaranteed State.
      You can see two policies in the drop-down menu below, one for marketing and one for the manufacturing group. Here’s a closeup of those policies. These policies are created using one or more rules in Tachyon Guaranteed State. This is post-change, and here we can see a rule from our policy targeting the marketing department; pay attention to the Not Applicable slice. Clicking on that reveals the following: here we can see that there is a check to ensure that the Zoom Sharing Service is enabled; however, this new version of Zoom doesn’t use it, as Zoom changed the way they structure their software.

      So how were these Guaranteed State policies created? Each rule can check for various things, such as free disk space, whether or not the Zoom Sharing Service is enabled, or that the 1E Client service is in a correct state. Below you can see a list of some of those rules. If we take a closer look at a rule, in this case a rule to ensure the DNS service is in a correct state, you can see from the screenshot below that the rule consists of optional Pre-conditions, Triggers, the Check itself, and an optional Fix.

      What about non-compliance post-change ?

      Seeing real-time results that reveal non-compliance post-change is a great ability, and that is exactly what our Guaranteed State policies provide. To test this, killing a service which is checked for (one of the rules above) reveals the problem in real time. Below a service is stopped… and reviewing the rule results, you can straight away see that there is non-compliance and drill down to find out more information. This is instantaneous, which means you can control the change management process with ease by gathering data and responding effectively.

      “So how quick is quick ?” This really depends on what you are looking at; for example, disk space might be polled every minute or every 30 seconds. But when you are talking about registry changes, config file changes, or services, that is real-time.
      Conclusion

      Change happens all the time in business, and while most companies have their own change management processes to deal with that change, they are very likely contributing to their own workloads by the way they do it. Remember, internal changes that are not correctly monitored pre- and post-change can cause major problems. Using Tachyon Experience and Tachyon Guaranteed State gives your admins the power to see those results in real time and allows them to easily tweak the change management process to increase their success rate.

      DISCLAIMER: The contents of this article are the opinion of the author and have been written from an impartial standpoint; however, 1E may have reimbursed the author for time and expenses for undertaking the findings and conclusions detailed in the article.