When Security & Business IT Align

A short essay about the often misunderstood relationship between cybersecurity measures and IT solutions from a business perspective.

Christian
7 min read · Jan 30, 2023
Do business and security meet? (Image from public domain modified by Tuulka G.)

I have often come across the belief that implementing cybersecurity in an enterprise is a threat to the business's goals. It is, however, well known and taught that this is not entirely correct. In fact, the business typically is, and should be, the underlying force that drives cybersecurity measures. As a matter of course, protecting business assets is in the utmost interest of any business. Which measures are sufficient is a question of proportion. But the relationship is more complex than that.

An enterprise can be more or less risk-averse. The acceptable level of risk is determined by the particular business's needs as well as the overall enterprise strategy, which takes concerns such as reputational damage into account. If you want to bear less risk, you most likely have to invest more money in cybersecurity. We therefore usually assume that more security means more cost, and we tend to regard security measures as something that slows down development when it comes to releasing the typical features of a software product.

I disagree with this generalization and with the negative connotation attached to implementing cybersecurity measures with respect to delivering software features. Yes, existing enterprise security products are expensive (anti-virus software, for example). Yes, writing or designing secure software is expensive*. Yes, adhering to security processes is time-consuming and costly. Yes, security personnel are hard and costly to find. But this is not the whole story of the relationship between cybersecurity and business IT solutions, because in many respects the two align perfectly: they share the same values of order, structure, documentation, and a culture of good communication.

(*) Well-designed software is *not* necessarily more expensive. Do you know the saying that goes: "software is expensive, quality software is less expensive"?

Software quality is not only an enabler for the development of business features on time and on budget but also a direct supporter of cybersecurity. For example, adhering to clean code principles reduces the likelihood of software bugs in general, which presumably also reduces the statistical odds of buffer overflow vulnerabilities (as the total number of bugs can be expected to correlate with the total number of buffer overflow vulnerabilities).

Conversely, I believe it holds that cybersecurity best practices can positively impact IT solutions and software quality in particular. Proper cybersecurity measures can, under certain circumstances, even accelerate the realization of (non-security-related) features of a software product.

In practice, I have observed two categories of instances in which cybersecurity measures have a positive impact on an IT solution's software product or its supporting processes: (1) a cybersecurity measure in place results in a more resource-efficient realization (or utilization) of an IT solution (for example, lower costs for implementing a feature) than if no cybersecurity measure had been put in place; (2) an enhanced cybersecurity measure results in a more resource-efficient realization (or utilization) of an IT solution than if only a minimal, basic security measure had been put in place. In the following sections, I will try to argue that claims (1) and (2) make sense.

I distinguish between cases (1) and (2) because their reasoning is not the same. Case (1) is a stronger claim than (2): case (1) compares a world with cybersecurity to a world without any cybersecurity. Case (2) assumes security measures are already implemented, and the question is only *how much* and *which* measures are to be put in place.

Example case (1)

I once worked in a very large organization that ran many of its software applications in an OpenShift cloud (PaaS). When you develop your applications on a cloud such as OpenShift, you need to customize the cloud to your needs. This typically involves writing deployment configurations for your applications as well as (virtual) network settings and other basic settings such as dedicated maximum computing resources and fail-over redundancies, to name just a few. As one might imagine, for larger organizations such cloud configurations can become quite complex. In the organization I worked in, handling those configurations had in fact become so complex that nobody really knew how to set up the currently applied cloud customizations from scratch in a reasonable time, should it ever be needed. In practice this meant: whenever the cloud needed to be recreated, for whatever reason, the team would not have been ready, and the customer would have noticed days of outage.

One day, a migration from one cloud host provider to another was due. Someone had to do the difficult job: adjust the clump of weirdly nested scripts to make them fit the new cloud host provider (URLs etc.). The tasks included figuring out the order in which the scripts had to be run, figuring out whether anything was missing that the scripts did not yet cover, adding whatever was missing, and afterwards testing whether everything still worked on the new cloud (because nobody knew whether these unmaintained scripts still reflected the configuration of the currently running cloud infrastructure at all).

It was "not possible" to first refactor the clump of spaghetti and then set it all up, as there was "no time to do it properly" according to the team, a typical excuse in projects where quality is not as important as it should be. Unsurprisingly, it took ages to migrate from one cloud platform to the other, because those cloud configurations were nested and twisted in an unimaginably untidy way. Among other things, hard-coded dependencies were "sprinkled" all over various files and folders. (If you have hands-on experience in this field, you will fully understand what I'm talking about.)

A few weeks later, security governance started to demand that all software be able to be set up in any other cloud at short notice whenever needed (a cold site). This is a typical example of disaster recovery readiness, a common security requirement. Security governance wanted to improve disaster recovery readiness through a fast and reliable cloud setup.

To fulfill the security-driven request, the team had to do the previously postponed refactoring of that clump of spaghetti scripts. They extracted the messy hard-coded strings scattered everywhere into one place and consolidated all scripts into a few structured ones at a single location. They were done with that priority task in no time. Now, one command would set up everything, wherever and whenever you wanted, within minutes.

This was a very useful undertaking. After all, it also helped the team create an extra cloud specifically for development in minutes, so that they could speed up development by ensuring that every developer had exactly the same cloud replicated for certain tests, if needed. Consistency was ensured, testability was improved, and overall maintenance effort was reduced. Security risks were mitigated. It seemed that doing security governance a favor wasn't a waste of time.
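
To make the shape of such a consolidation tangible, here is a minimal sketch of a one-command setup. It is purely illustrative and not the team's actual tooling: the file names, the JSON environment file, the placeholder syntax, and the `oc apply` invocation are all assumptions.

```python
#!/usr/bin/env python3
"""Hypothetical one-command cloud setup: render manifest templates from a
single environment file and apply them in a fixed, documented order."""
import json
import pathlib
import string
import subprocess


def load_environment(path: str) -> dict:
    """Read all provider-specific values (URLs, namespaces, quotas) from one file."""
    return json.loads(pathlib.Path(path).read_text())


def render(template_path: pathlib.Path, values: dict) -> str:
    """Substitute placeholders such as ${REGISTRY_URL} in a manifest template."""
    return string.Template(template_path.read_text()).substitute(values)


def apply_all(env_file: str, template_dir: str = "manifests") -> None:
    values = load_environment(env_file)
    # Sorted file names (e.g. 00-namespace.yaml, 10-quotas.yaml) make the
    # setup order explicit instead of leaving it as tribal knowledge.
    for template in sorted(pathlib.Path(template_dir).glob("*.yaml")):
        manifest = render(template, values)
        # 'oc apply' is idempotent, so the same command can also update a running cloud.
        subprocess.run(["oc", "apply", "-f", "-"],
                       input=manifest, text=True, check=True)


if __name__ == "__main__":
    apply_all("environments/new-provider.json")
```

The point is not the specific tooling but the structure: every provider-specific value lives in a single environment file, and recreating the cloud on a new provider comes down to supplying a different file and running the same command again.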

What the architect had tried to push for over a year, to no avail, was finally put into reality within a single week after one authoritative request from security governance.

Example case (2)

This example is about a common security design flaw. The flaw was that crucial secret keys were hard-coded in the source code instead of being loaded by the software application from an external file (or a secrets enclave). This resulted in the need to release a new software (source code) version every time a key had to be renewed, and, worse, the same key was used everywhere the software ran. As the software was rolled out to various customers, they all had the same keys within their IT infrastructure. Not only did this negatively affect security and the release process, but customer support as well. Because the same trusted key fingerprints were used everywhere, access to the software was always a hassle, since the keys could not be shared with the customer; the customer would then hold a key that could also access other customers' devices.

Gluing key material to software sources is bad both in terms of its impact on the release process and in terms of security. Again, a security-by-design approach would have ensured a proper architecture that inherently supports a good release process. Keeping configuration, key material, and source code separate is one of the basic best practices when building software. It is, in principle, nothing other than separation of concerns: keeping things that belong together in one place. It's common sense, and security best practices fully align with it.
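
As a minimal sketch of that separation, the snippet below loads a key from the deployment environment rather than from the source code; the variable name APP_API_KEY and the secret file path are made-up placeholders, not the product's actual configuration.

```python
import os
import pathlib


def load_api_key() -> str:
    """Hypothetical example: the key is referenced by the code but provided by
    the deployment (an environment variable or a mounted secret file), so renewing
    it, or giving each customer their own key, never requires a new release."""
    key = os.environ.get("APP_API_KEY")
    if key:
        return key
    secret_file = pathlib.Path("/run/secrets/app_api_key")
    if secret_file.exists():
        return secret_file.read_text().strip()
    raise RuntimeError("No API key configured for this deployment")
```

With the key supplied per deployment, each customer automatically receives their own key material, and rotating a key becomes an operations task rather than a software release.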

In the above example, a security measure done properly resulted in an improved solution, with not only an enhanced security level but also an improved release process.

Closing Remarks

We should recall that "complexity is the enemy of security", as the renowned cybersecurity authority Steve Gibson has repeatedly emphasized, and that order and structure are key to productive software development (for obvious reasons). Good architecture should be expected to inherently result in security by design, and vice versa: a solution built with security by design in mind can be expected to underpin good software architecture in general.

It is said that cybersecurity is a process, not a product. Software development, by the way, is a process too. There is, however, a difference: cybersecurity is, more or less, a top-down approach, whereas classic software development is typically very much bottom-up; that is my experience, at least. Could it be that a competent body of authority is the advantage the cybersecurity approach has, in some cases, over (agile) software development? The practice of cybersecurity embodies professionalism, and that can also be software development's biggest strength.

When your spiderweb of IT solutions keeps growing, you may want to consider giving security governance a buzz.

Addendum: For German-speaking readers, I recommend the "Hierarchy Paradox" essay: "Das Hierarchie-Paradox" on the blog "Gerrit und wie er die Welt sieht" (gerritbeine.com).
