Source Control - Custom or Repo?

Hey folks! We are tightening controls on custom content internally, and I am curious about source control / change validation on the content we create (ITIL and whatnot). I have seen some mentions from Mr. @jgstew about checking their content into git, and it got me thinking…

We currently use Azure DevOps, and I don't mind writing custom applications to do the lifting, but I don't know that I will have time to really get into DevOps right off the jump… What about hashing the content? Dump out certain fields (maybe excluding any DateTime fields), hash those, and put the hashes into a SQL table to run comparisons against and report on. I thought just dumping this out separately, rather than touching the BigFix tables, might be a less intrusive way of doing it…
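For what it's worth, here is a minimal sketch of that hashing idea in Python, assuming each piece of content has already been exported to a .bes XML file; the element names filtered out below are just placeholders for whatever DateTime-style fields you decide to exclude:

```python
import hashlib
import xml.etree.ElementTree as ET
from pathlib import Path

# Placeholder tag names for the volatile DateTime-style fields to exclude.
VOLATILE_TAGS = {"SourceReleaseDate", "LastModified"}

def stable_hash(bes_file: Path) -> str:
    """Hash a .bes export with volatile fields stripped, so only real changes alter the hash."""
    root = ET.parse(bes_file).getroot()
    # Remove the excluded elements wherever they appear in the document.
    for parent in root.iter():
        for child in list(parent):
            if child.tag in VOLATILE_TAGS:
                parent.remove(child)
    return hashlib.sha256(ET.tostring(root, encoding="utf-8")).hexdigest()

if __name__ == "__main__":
    for path in sorted(Path("exports").glob("*.bes")):
        print(path.name, stable_hash(path))
```

The resulting digests could then go into your SQL table, and a change report is just a comparison of today's hashes against yesterday's.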

Any thoughts? Recommendations? Lessons learned?

-J

I wanted to get to the same level, especially since we run all of our content development requests through ADO. If we get the contents of scripts/fixlets and the actual downloads into GIT repos, we can link any changes in source control back to the actual requests. But I couldn't get it to work: Introduction to BigFix Download Plugins (Technical) - #4 by jgstew

It was probably me; I only spent about half a day on it and haven't had time to revisit it, but I just couldn't get the custom plugin from @JasonWalker to work with ADO authentication. If you have better success with it, please share it. I would be happy to see it in action.

The way I would probably recommend doing this is to develop content in a custom site on a dev/test root server, have that site automatically backed up to GIT on a daily or hourly basis, then have another root server that takes the content from GIT and puts it into a custom site.

So you have a 1 way sync from DevRoot → GIT (export, git commit and push)

Then a 1 way sync from GIT → ProdRoot (git pull, import)

I have something written to handle the DevRoot → GIT part, but not so much the GIT → ProdRoot part.
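Here is a rough sketch of that DevRoot → GIT half, assuming the export step (not shown) has already written the .bes files into a local clone; the repo path and commit message format are just placeholders:

```python
import subprocess
from datetime import datetime, timezone

# Placeholder path to a local clone of the content repo; the export step is
# assumed to have already written the exported .bes files into this folder.
REPO_DIR = "/opt/bigfix-content-repo"

def run_git(*args: str) -> str:
    """Run a git command inside the repo and return its stdout."""
    result = subprocess.run(
        ["git", "-C", REPO_DIR, *args],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

def sync_to_git() -> None:
    run_git("add", "--all")
    # Only commit and push when the export actually changed something.
    if run_git("status", "--porcelain").strip():
        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
        run_git("commit", "-m", f"Automated content export {stamp}")
        run_git("push", "origin", "HEAD")

if __name__ == "__main__":
    sync_to_git()
```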

That would work well for fixlets / tasks / analyses, but not for baselines or for handling the binaries behind downloads. If you have binaries you want to distribute, you could put them on a webserver that is accessible to both the DevRoot and the ProdRoot and reference that URL for the binaries in the prefetches, so both roots get the binaries from the same place.
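For that shared webserver approach, the prefetch just needs the file's hashes, size, and URL, which you can generate from the binary before you publish it. A small sketch, with a hypothetical URL and filename:

```python
import hashlib
from pathlib import Path

# Hypothetical webserver URL and local file; substitute your own.
BASE_URL = "http://downloads.example.com/bigfix"
BINARY = Path("MyInstaller.exe")

def prefetch_line(path: Path, base_url: str) -> str:
    """Build a BigFix prefetch statement with sha1, size, URL, and sha256 for a local file."""
    data = path.read_bytes()
    sha1 = hashlib.sha1(data).hexdigest()
    sha256 = hashlib.sha256(data).hexdigest()
    return (
        f"prefetch {path.name} sha1:{sha1} size:{len(data)} "
        f"{base_url}/{path.name} sha256:{sha256}"
    )

if __name__ == "__main__":
    print(prefetch_line(BINARY, BASE_URL))
```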

If you do this in such a way that it uses the GIT command line to do the pull / push, then it doesn't matter whether you are using GitHub, Bitbucket, or Azure DevOps; it is all just GIT.

After some more digging, I think the custom content we've created is not as much of a concern to us as the custom applications we have had to write to do other things during execution, and we are addressing that. For our BigFix content I think I am going to end up doing an export / daily archive of our content, and I found a NuGet package to help with the Git push. I am trying to keep it from having too many moving parts; it is mostly just so we can monitor the changes. That is an interesting approach @jgstew, for sure. TY.


I usually just interact with GIT directly by having GIT installed on the system and cloning a repo to a folder… my scripts treat it as just a folder, and then I use GIT to do the push.

You can use packages to interact with GIT in a more native way in PowerShell or Python, but that is often overkill for this use case.
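If you do want the more native route in Python, GitPython is the usual choice; this is a rough equivalent of the add / commit / push step (the repo path and message are placeholders), though shelling out to git is usually all you need:

```python
# Requires: pip install GitPython
from git import Repo

# Placeholder path to an existing local clone of the content repo.
repo = Repo("/opt/bigfix-content-repo")
repo.git.add(all=True)  # stage everything, including new files
if repo.is_dirty(untracked_files=True):
    repo.index.commit("Automated content export")
    repo.remotes.origin.push()
```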

This is exactly what I do: all custom content is pushed to GIT daily, which gives me diffs of what is changing in our content. Super helpful not just as a backup mechanism, but also as a changelog.

I basically use this to export all content: besapi/examples/export_all_sites.py at master · jgstew/besapi · GitHub
