Managed Tokenization – Why Solutions Need to be Administered Properly
by Randal Becker
Steve’s Log, 03:15am: Why am I being pinged for support… again? We just put in a new release and it worked perfectly – or so it appeared. If anything, the system performed better than the last release. I should have known. I’m going to go get coffee.
Steve’s Log, 03:30am: Coffee in, now I can actually see the message on my phone. One of the new fields that was added is not being tokenized and the CIO got an angry phone call. Again. And now I’m getting yelled at. What the ___ is going on? We’ve done this over and over without trouble. Why now?
Steve’s Log, 04:30am: What do you mean we missed a field? How could we miss a field? It’s there in our development system. It’s there in QA. It’s in the, oh so carefully written, turnover document… oh crap.
Steve’s Log, 04:45am: Typo. Someone made a bloody typo in the production configuration file. One character wrong, and all hell breaks loose. At least it’s an easy fix, once we get SUPER.SUPER unsealed to allow the fix to be applied.
The above story is fictional but based on events that occur all too often. So, what happened, why, and how can we avoid this sort of production problem in the future? Simple. What’s the plan this time? The same thing we do every other day, for everything else we work on: use DevOps best practices.
The problem is that between human eyes and a keyboard lurks the source of virtually all production problems: blurry eyes, rushed changes, or, more likely, changes so complicated that errors are inevitable. Try talking someone through a series of OSS commands over the phone to fix a permissions issue some time and see how that goes. There is also no way to reliably verify that changes have been applied correctly unless another set of eyes is on the problem and really understands what is being changed. The human brain has blind spots, and we often see what we expect to see rather than what is really there.
This means that complex configuration changes – including those for advanced tokenization solutions, routing tables, and Pathway server settings with loads of environment variables – are all highly error prone when applied by hand, with no reliable means of verification.
This puts your production environment at risk every time you make a change.
NSGit tracks all changes and ensures correct deployments
NSGit to the rescue. The first thing we must toss is the turnover document with actual code and configuration changes in it. The only instructions that should be in the turnover document are the identifiers of the changes being installed, not the changes themselves. The rest of the document should describe how to verify that the changes are correct. NSGit tracks all changes and ensures correct deployments. Without manual intervention. And with a full audit trail. Doesn’t get much easier and faster than that.
Danger: git ahead
Like everything else in a properly audited world, changes need to come from an authoritative source. Today, this means somewhere on your private cloud where the approvals for your changes are recorded. Think GitHub Enterprise, Bitbucket Server, or GitLab – and not the public cloud, please. You can also use an artifact repository if you want, but those don’t generally carry the audit trail of the approvals, so that is not my personal preference. The flow is straightforward:
- Development makes the change and commits it to git.
- QA gets the change record and pulls it into the testing environment from git. If it works, great; if not, the change is bounced back to development. The only thing QA needs is the change request identifier, not the manual changes to be applied. They might also want the commit id just to be sure.
- Production gets the change record and pulls it into the production environment from git. Sure, you might have to bounce servers, but that’s not configuration. The verification is that the commit id is the same as in the change record and that, when you ask git, it shows you are on the correct commit with no outstanding changes to be made. Fortunately, commit ids are so different from each other that it is nearly impossible to visually confuse two of them.
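The production verification step above can be sketched in plain git commands. This is a minimal, self-contained illustration – the repository, the committer identity, and the idea of capturing the approved commit id as the "change record" are all stand-ins for your own environment:

```shell
# Sketch of the production deploy-verification step.
# All names and paths here are illustrative, not from any real installation.
set -e

# --- setup: a throwaway repo standing in for the authoritative config repo ---
tmp=$(mktemp -d)
git init -q "$tmp/config-repo"
cd "$tmp/config-repo"
git -c user.email=qa@example.com -c user.name=QA \
    commit -q --allow-empty -m "approved config change"

# The commit id recorded in the change record by QA.
expected=$(git rev-parse HEAD)

# --- verification: what production runs after the pull ---
actual=$(git rev-parse HEAD)
if [ "$actual" = "$expected" ] && [ -z "$(git status --porcelain)" ]; then
    echo "deploy verified: on commit $actual, working tree clean"
else
    echo "deploy MISMATCH: expected $expected, got $actual" >&2
    exit 1
fi
```

The two checks mirror the prose: the commit id must match the change record, and `git status` must report nothing outstanding.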
When we are talking about configurations for solutions as complex as tokenization, Production would get the change from the same place, and with the same content, as QA. It would be up to QA to ensure the change is correct and approve it, but we have taken human hands away from making the changes themselves at the point of highest stress to an organization – the overnight install process. This rule applies to any component or application with any kind of complexity, so I am not just calling out tokenization solutions; it applies equally to security and network management.
At any point in time, anyone can verify that the configuration is correct by asking git. It’s as simple as literally running git status.
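Here is what "asking git" looks like in practice. The repository, file name, and the one-character typo are hypothetical, chosen to echo Steve’s story:

```shell
# Detecting configuration drift with plain git.
# The repo and fields.conf are illustrative stand-ins.
tmp=$(mktemp -d)
git init -q "$tmp/etc-config" && cd "$tmp/etc-config"
echo "tokenize=PAN,SSN" > fields.conf
git add fields.conf
git -c user.email=ops@example.com -c user.name=Ops commit -qm "baseline config"

git status --porcelain    # prints nothing: deployed config matches git
echo "tokenize=PAN,SSM" > fields.conf    # a one-character typo, as in Steve's 04:45 entry
git status --porcelain    # prints " M fields.conf": the drift is visible immediately
```

No second set of eyes, no line-by-line comparison – the typo that took Steve until 04:45 to find shows up in one command.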
In the Linux world, many configurations are now being managed using straight git. ITUGLIB does it for OSS and uses it to ensure that the sudoer definitions have not been changed.
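The sudoers check can be sketched the same way. Note that the repository layout and file name below are my own assumptions for illustration, not ITUGLIB’s actual setup:

```shell
# Minimal sketch: alerting when a git-tracked sudoers file changes on disk.
# Repo location and file name are hypothetical.
tmp=$(mktemp -d)
git init -q "$tmp/sudoers-audit" && cd "$tmp/sudoers-audit"
printf '%%admin ALL=(ALL) ALL\n' > sudoers
git add sudoers
git -c user.email=audit@example.com -c user.name=Audit commit -qm "approved sudoers"

# Periodic check (e.g. from cron): a non-zero exit from git diff means
# the file on disk no longer matches the approved commit.
if git diff --quiet -- sudoers; then
    echo "sudoers unchanged"
else
    echo "ALERT: sudoers modified outside git" >&2
fi
```

Run from cron, this turns "has anyone touched sudoers?" into a yes/no answer with an audit trail behind it.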
On our platform, the configurations are typically stored in Guardian. It is the same solution, though; just add NSGit to the mix. Your installation is now literally nsgit pull, and verification is nsgit status.
This ensures that the configurations are delivered and managed exactly as they are intended to be, so that neither the CIO nor Steve gets woken up for that 3am support situation.