lee_SCCM

1. I am moving my beta lab to RC now and starting to think a little more about the design for production. In the ConfigMgr 2007 world I have the following: a central site which houses SRS and the SUP along with a couple of other roles; it is also the main source of original media for my software repository. I have a primary site which looks after all of my remote sites (they have DPs, except for two secondaries in slow-link areas). As you can see, a nice simple topology at the company I'm at currently. :-)

Moving into the 2012 world, I see the DPs becoming 2012 DPs and the secondaries becoming DPs (utilising the throttling capability). My question is around the whole CAS/primary decision. In my current design the central site is used as a single point from which all changes flow down; it does, however, have the capability to be brought up as a DP. If I lose my primary, which serves the site with the largest number of clients (London), the other boundaries can easily be brought into the central site and clients will report to it. In 2012, though, the CAS does not have the ability to accept client assignments, which poses the dilemma of how best to tackle this and still provide a good level of fault tolerance.

So far I have devised the following. Most overkill: one CAS (as the 2007 central site could do more), two primaries (simulating what I have now), and my 20+ DPs. Another idea was one CAS, one primary, and my 20+ DPs; the issue there is how to have a safeguard in place for when my primary goes down, so the clients at the sites without local DPs can still get content.

Curious to know if anyone can shed any light on this, as the structure is small at the moment and I don't want to over-complicate it. My only concern with having no CAS at all is that surely you are cutting your ties straight away to ever having two primaries in the same hierarchy? Appreciate any feedback. Regards
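To make the trade-off in the post above concrete, here is a toy sketch (plain Python, nothing to do with ConfigMgr tooling) of the fault-tolerance question: with two primaries under a CAS there is a surviving primary that London's clients could be moved to, while with a single primary there is not, because the CAS takes no client assignments. All site and boundary names are invented for illustration.

```python
# Toy model of the two candidate 2012 hierarchies described above.
# A CAS accepts no client assignments, so it never counts as a
# fallback. All names here are invented for illustration.
hierarchies = {
    "1 CAS + 2 primaries": {"PRI-London": ["London"], "PRI-2": ["RemoteA", "RemoteB"]},
    "1 CAS + 1 primary": {"PRI-London": ["London", "RemoteA", "RemoteB"]},
}

def stranded_clients(primaries, failed):
    """Boundaries with no surviving primary to be moved to.
    (Reassignment is still a manual step in real life; this only
    shows whether a fallback primary exists at all.)"""
    survivors = [p for p in primaries if p != failed]
    return [] if survivors else primaries[failed]

for name, primaries in hierarchies.items():
    lost = stranded_clients(primaries, "PRI-London")
    print(f"{name}: if PRI-London fails, stranded boundaries = {lost}")
```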
2. No; as I have in the log file, for example in this TS there are two packages, and in most cases it is either one or the other that fails. In some instances one of them is already in the cache folder, so it does not need to re-download; but where it does have to, it seems to know where to get it from, and then the hash value is wrong, so it bombs out. This in turn bombs out the sequence, and the software is never applied.

I'm going to throw a spanner in the works so all my cards are on the table here. A product called Nomad is used at what seem to be the BDP sites. Not sure if you're aware of it, but it's a peer-to-peer tool that, "from what it says on the tin", can help get around the BDP's 10-connection maximum. My next stage of testing is to determine whether it could in some way be interfering. I'm told it's certified to work with SCCM, but I mention it so you know the environment and don't go down a blind alley when I haven't explained the whole setup. Thanks
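For what it's worth, when chasing mismatches like this it can help to recompute a hash over the package source and over what actually landed in the client cache; if the two trees differ, something (a BDP, Nomad, or the copy itself) changed the content in transit. The sketch below is one rough way to do that comparison. The cache path and package ID are made-up examples, and this is not the exact hashing algorithm the SMS client uses, so treat differences between the values, not the values themselves, as the signal.

```python
import hashlib
from pathlib import Path

# Rough comparison sketch: hash every file under a directory tree in a
# stable order. This is NOT the algorithm the SMS client itself uses to
# validate content; it only tells you whether two copies of a package
# are byte-identical. Paths and the package ID are hypothetical.
def tree_hash(root: Path) -> str:
    h = hashlib.sha1()
    for f in sorted(root.rglob("*")):
        if f.is_file():
            h.update(f.relative_to(root).as_posix().encode())
            h.update(f.read_bytes())
    return h.hexdigest()

source = Path(r"\\server\packages\ABC00012")    # hypothetical source share
cache = Path(r"C:\WINDOWS\system32\CCM\Cache")  # typical 2007-era cache location
print("source:", tree_hash(source))
for entry in cache.glob("ABC00012*"):           # cache folders start with the package ID
    print(entry.name, ":", tree_hash(entry))
```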
3. No; for example, one BDP has a large chunk of clients that received the package fine, and it showed as a success on the report. That site has two BDPs: those clients worked, whereas some failed, like the one in that particular log file. Why would some work and some not? It makes no sense if the issue is the package and its hash value. :S
4. Yeah, I have updated the DP several times, but to no avail; making it a version 2 didn't fix it either. Why would this affect some machines and not others? Like I say, when WOL is working I have around a 72% success rate (aside from minor anomalies); it's just that that is not good enough. The only thing I have not done is tear the package down from the DP, let it refresh, and then push it back up again fresh. Think that would make any difference? Also, it's so sporadic; it never happens at my London site. :S Rattling my brains on this one!
5. Peter, thanks for your response. I am doing a Europe-wide deployment to multiple offices that all have separate BDPs, and I'm pleased to have some reassurance that it is probably not the binary causing the issue. In terms of what my group of machines have in common: a few offices have produced the same problem; they would have perhaps two BDPs at the site, or they would go to the European primary for the package. I have enclosed two different screenshots of the log (opened in Trace32 so it's easier to read) showing the process it goes through as it bombs out. If there is anything specific you want me to check in common, let me know. Thanks, Lee

Log without BDP.bmp  Logwitha_BDP.doc
6. Hi, we are currently using task sequences to deploy software packages, sometimes multiple packages and sometimes single ones. Our site setup means we have a central server, three primaries, and the rest made up of secondaries or BDPs. As the title suggests, we seem to have an ever-recurring issue whereby (for example, the deployment today that prompted this) I deployed Adobe and one other application to 400 machines last night. This morning I came in to find around 18% of these have failed, and generally the error looks like the status messages below:

05/10/2010 16:11:44   Accepted    Program received                                       10002   12177032
05/10/2010 22:59:01   Running     Program started                                        10005   12202192
05/10/2010 23:04:06   Succeeded   Program completed with success                         10008   12202596
05/10/2010 23:04:07   Waiting     Waiting for content                                    10035   12202606
05/10/2010 23:04:29   Failed      Program failed (download failed - content mismatch)    10057   12202849
06/10/2010 05:00:01   Expired     Program expired                                        10019   12207830

As the failed line shows, "download failed - content mismatch" is my issue. What is weird is that a large proportion work perfectly fine; however, 18% is too big a number to fail on a regular basis. I have searched the length and breadth of the internet for answers, and they vary in how to potentially resolve it. I have tried updating the DP, but that did not work; I have not turned off binary differential replication, as it's a BDP; and I'm reluctant to do anything ultra-extreme, bearing in mind that so many machines complete with no issues.

In this TS there are two apps, and on the few machines where I have sampled the CAS.log file, it references one or the other package failing to download, and sure enough, on that machine it is not in the cache folder. Anyone who can throw any light on this would be most appreciated. Thanks in advance
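As a quick way to size a problem like this across all 400 machines, status lines in the layout quoted above can be tallied to show exactly what share of a deployment ended in each state. The sketch below assumes the status messages have been exported to a plain-text file in the same column layout as the excerpt; the filename and that layout are assumptions, not a documented export format.

```python
import re
from collections import Counter

# Tally advertisement status lines of the form quoted above:
#   date time  state  message  messageID  recordID
# The layout is inferred from the excerpt in this post, not from any
# documented export format, so adjust the pattern to your own export.
LINE = re.compile(
    r"^(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2})\s+(\w+)\s+(.+?)\s+(\d{5})\s+(\d+)$"
)

states = Counter()
with open("advert_status.txt") as fh:    # hypothetical export file
    for raw in fh:
        m = LINE.match(raw.strip())
        if m:
            states[m.group(2)] += 1      # group 2 is the state column

total = sum(states.values()) or 1
for state, count in states.most_common():
    print(f"{state:<10} {count:>5}  ({count / total:.0%})")
```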
7. Thanks for the reply. OK, I have made some progress. I have created a state migration point and constructed a task sequence that captures the user state with USMT while in XP; this refers to an x86 package, and it seems to collect the data OK. It then installs Windows 7 and runs through requesting the state store, but it seems to fail on restoring the user settings, and I am not sure why. I have made this USMT package x64, as it is now running in Windows 7; is that correct? Thanks
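For anyone hitting the same wall: the usual guidance is that the USMT binaries must match the architecture of the OS they are running under at that moment, so an x86 scanstate under 32-bit XP for the capture and an x64 loadstate under 64-bit Windows 7 for the restore is the right pairing (the restore failure itself can still have other causes). Below is a minimal sketch of that selection; the share paths, store location, and XML list are assumptions based on a typical USMT 4.0 layout, not taken from this task sequence, and when a state migration point is used the Capture/Restore User State steps handle the store location for you.

```python
import os
import platform
import subprocess
from pathlib import Path

# Minimal sketch: pick the USMT binary that matches the OS currently
# running, which is the detail the restore step trips over. All paths
# and the XML list are hypothetical examples of a USMT 4.0 layout.
USMT = Path(r"\\server\share\USMT4")    # hypothetical package share
STORE = Path(r"\\server\migstore") / os.environ.get("COMPUTERNAME", "unknown")
XMLS = ["/i:migapp.xml", "/i:miguser.xml"]

arch = "amd64" if platform.machine().endswith("64") else "x86"

# Capture runs under 32-bit XP, so it always uses the x86 tree;
# restore uses whatever matches the freshly installed OS (x64 here).
capture = [str(USMT / "x86" / "scanstate.exe"), str(STORE), *XMLS, "/o", "/c"]
restore = [str(USMT / arch / "loadstate.exe"), str(STORE), *XMLS, "/c"]

subprocess.run(restore, check=True)   # or `capture` during the XP phase
```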
8. Thank you for your response. Do you have any sites that describe the process? I have tried it a couple of times, but it keeps failing every time at the capture-settings step. Regards, Lee Martin
9. Hi all, I am currently beginning to look at upgrading XP to Windows 7 in an enterprise environment. I have upgraded to SP2, have successfully built my own image and answer file, and can deploy with no trouble. I am now looking into USMT and am looking for the best method with which to proceed. I want to take the settings, files, etc. from the machine, back them up somewhere, then install Windows 7 and transfer the data back down again. I know about hard-link migration, but I would much prefer to start with a clean slate on the machine in Windows 7 and make it as independent as possible, without user interaction. Appreciate any advice. Thanks, Lee