Security Architecture and Design Part III: Going Wrong

At its most abstract, security concerns itself with the protection of a specified object, whether that object is immaterial like information or physical like a building. Information security practitioners concern themselves more often than not with safeguarding communication of some form. Typically a protocol is designed whose goal is to assure one or more of the traditional CIA elements: Confidentiality, Integrity, and Availability. Security is no different from any other engineering problem: we attempt to reach an ideal state while weighing the benefits against the cost and harm introduced by changes to the system. There is no perfect state of security, only relative harmony with uncertainty.

So where do things go wrong? For every advantage a principle imparts, there exist potential problems. For example, one of the benefits of modularity is information hiding. Yet over time, the information contained within a module and the details of its operation can fade from institutional knowledge through shifting priorities and loss of personnel. As system complexity grows, interactions that were never anticipated permit security breaches. This happens in normal system evolution even in the absence of malfeasance or error.

Each of the six operators on modularity, and the category of changes each embodies, brings with it the potential for errors. The first operator discussed was splitting, in which one module becomes two. Take, for example, a single-level system that is split into two levels, a high security level and a low security level. In this case we need a constraint of confidentiality on the high level: information should not flow down (the Bell-LaPadula model). We prevent the lower level from reading the higher level, and we prevent the higher level from writing information to the lower level. But what happens if the lower level writes up to a file with the same name as an existing high-level file? Once we make a change, we need to re-think the security.
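The two Bell-LaPadula rules mentioned above can be sketched in a few lines. This is a minimal, illustrative model (the level constants and function names are my own, not from any particular implementation); note that the model's rules permit exactly the "write up" that causes the file-name collision problem:

```python
# Minimal sketch of the Bell-LaPadula access rules (illustrative only).
# Levels are integers: a higher number means more sensitive.
LOW, HIGH = 0, 1

def can_read(subject_level: int, object_level: int) -> bool:
    """Simple security property: no read up."""
    return subject_level >= object_level

def can_write(subject_level: int, object_level: int) -> bool:
    """*-property: no write down."""
    return subject_level <= object_level

assert not can_read(LOW, HIGH)   # low may not read high
assert not can_write(HIGH, LOW)  # high may not write down
# But writing up is allowed -- a low subject could create a HIGH file
# whose name collides with an existing high-level file, the unanticipated
# interaction described above.
assert can_write(LOW, HIGH)
```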

Substitution, where we replace one module with an improved one, is the source of many problems. One runs into regression errors, where problems fixed in preceding generations are reintroduced. Internet Explorer has had this happen several times. Sometimes the substitution itself introduces new features that break the security of the module, if not the entire system.

Augmentation, that is, introducing new modules or duplicating existing ones, also creates problems. Sometimes, for cost reasons, failover systems are less robust than the main system. A system comes online to replace the downed system and cannot handle the load. This can also be exploited by intentionally increasing the load on the redundant system: wait for an outage and then attack the backup.

With the exclusion operator, removing a system element can create a weaker system, particularly if the removal is driven by convenience or cost, for example, dropping two-factor authentication. Alternatively, removing a module may reduce the system's capability to respond to an environmental change: we exclude a module for security reasons, and that decreases the flexibility of the system.

Inversion can go wrong because of its new global nature. Where before any weaknesses were local and isolated down the system hierarchy, now the system is overarching and can cause widespread damage. Previously, I used identity management as an example of a system inversion. Imagine a data synchronization event from a malicious administrator that changes everyone's network ID.

Porting can go wrong from a domain knowledge limitation. Features of the system where the module was first created do not exist in the target system, and a lack of understanding of this introduces errors. A network module of an application on Linux is going to need different safeguards when it is ported to Windows.

Traditional design offers the information security architect principles and a way of thinking about securing systems. Analyzing systems in terms of modules, and the operators on those modules, allows us to build flexible, robust systems. But there are always trade-offs, and one must examine the operators for deleterious second- and third-order effects. They are there, and you cannot possibly find them all, but thinking about them in terms of modularity and the module operators may help you find more.

What I have tried to do in this short series is show a way of approaching security architecture in systems that draws on the knowledge gained over the years and embodied in principles; principles that are reflected in complex adaptive systems. Through increasing modularity and flexibility, secure systems can be built at lower cost, reducing their impact on corporate profitability.

note: fixed typo and updated post.


Security Architecture and Design Part II

In Part I of this post I discussed generic design principles that apply directly to information security, viz. modularity and flexibility. I discussed the six operators on modularity identified by Baldwin & Clark, and in this post I will examine examples of each of these operators from engineering secure systems. To review, the operators are as follows:

  • splitting
  • substituting
  • augmenting
  • excluding
  • inverting
  • porting

The first thing typically done during redesign of a monolithic, interdependent system is to split the modules. Take, for example, a symmetric cryptographic system, which uses a single key for both encryption and decryption. The symmetry can be broken by splitting the function so that there is one key for encryption and one for decryption (a public key and a private key). Although it wouldn't make sense in this example, once we have split the function, development can continue in parallel, with each new module following a separate evolutionary path.
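The split described above can be illustrated with the classic textbook RSA example (tiny numbers, not real cryptography): one function, keyed symmetrically, becomes two related functions with distinct keys.

```python
# Toy illustration of splitting encryption/decryption into two keys,
# using the standard small-number RSA textbook example. Not secure --
# real keys are thousands of bits.
p, q = 61, 53
n = p * q                  # modulus, shared by both keys
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public (encryption) exponent, coprime to phi
d = pow(e, -1, phi)        # private (decryption) exponent (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)    # encrypt with one key ...
plain = pow(cipher, d, n)  # ... decrypt with the other
assert plain == msg
```

The two exponents can now evolve independently (key sizes, padding, storage) so long as the mathematical relation between them is preserved, which is exactly the constraint the substitution operator below must respect.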

The next operator, substituting, is complementary to splitting. With substituting we replace an existing module with one that is "better" in some way, whether lower cost or higher performance. For example, having previously broken the symmetry of our encryption, we are now able to optimize the encryption and decryption functions separately, so long as they do not lose their essential mathematical relatedness. Following this innovation, we replace the existing functions. The operation of substitution normally occurs over the useful life of the system.

Further into the lifecycle, the augmentation operator is used. This is where a new module is added to an existing system. For example, if only one person has the launch authentication code for a nuclear weapon and that person is eliminated, the country loses the ability to respond to a first strike. Therefore a new module is added, a backup system, in which the authentication code is a shared secret among several parties who, when they put their parts of the secret together, can derive the authentication code.

The exclusion operator is the removal of a module. We take something away to make the system more secure and possibly lower its cost. A common example would be the removal of an application programming interface (API) prior to deploying an application to reduce the attack surface, or the removal of unused network protocols when hardening a system.

The fifth operator is inversion. This typically happens later in the life cycle, after many design iterations. An example of inversion would be an identity management system: where prior to its deployment each application managed its own identities, now there exists a single system which centralizes the function and eliminates redundancy.

Porting is the operator people are most familiar with, and there are numerous examples. A common port is moving a security monitoring system, for example a host-based intrusion detection system, from one operating system to another. Porting normally occurs after the design has been proven on one particular platform, whether that proof came via testing or in the market.

Those are the six modular operators with examples.  In the next post I will address things that can go wrong when using them.

Themes Change Required ..

Ok, I had to change the blog theme again. Hopefully for the last time. The problem with the previous theme was that it didn't show the author of the post, and since this blog evolved from a single-author blog (Gregory Guglielmetti) to a multi-author one (welcome aboard, Gregg Dippold), I had two options: modify the "fadtastic" theme we were using or take another theme that shows the author name. Being the lazy man I am, I went the easy route.

Security Architecture and Design

A while back a query was made on a mailing list (by James McGovern, I believe): what questions do you ask architects in job interviews? My response, without being facetious, was "What is the purpose of an architect?" The answer is straightforward, but many people will give you a circuitous route through the function of design without ever giving the simple answer, to wit: the purpose of an architect is to make the decisions which, once made, are permanent and therefore expensive to change. Over the long term, one design goal should be to reduce the number of decisions which cannot be reversed. In this first post I will look exclusively at design principles, and in a later post apply them to a system.

What principles are available to the security architect when securing a system, that is, principles from design not security?  How should one convert those things which are permanent into flexible objects or at least increase their range of expression? When asking these kinds of questions we can examine how they are answered in nature.  Consider, for example, that the alphabet of deoxyribonucleic acid is limited but the information it carries is a blueprint for all advanced life on the planet. We know from the study of complex adaptive systems that modularity and flexibility (real options) are foundational approaches to dealing with complexity and increasing the ability to survive shocks.  Therefore, when first approaching the design of a system, our primary principle is modularity; modular in the sense that the parameters and design choices within the module are interdependent and those external to the module are independent.  More generally, there is information which is hidden and information which is seen.

Modularity gives us specific advantages: it allows us to contain patterns within an object whose encapsulated parameter range abstracts the system complexity, making it manageable; it allows for concurrency of design within the system by breaking unnecessary interdependencies; and it increases flexibility and improves adaptability, both at the time of design and later as things change. Because it can accommodate future uncertainty, it reduces risk, the primary goal of security. Absent modularity, one has two monolithic elements: the system design and the system design tasks. As complexity increases, understanding drops, and as experience has shown, complexity is the enemy of security.

Beyond the abstractness of this principle, what kinds of operations can be conducted on modules that will improve them over time? Baldwin & Clark, in their book Design Rules, Vol. 1: The Power of Modularity, posit six operators on modules. Anyone who has designed anything has most likely absorbed these practices through trial and error, but it pays to state them explicitly. They are as follows: splitting, substituting, augmenting, excluding, inverting, and porting.
These operators work in combinations; for example, some features are excluded at the start of the system and augmented after go-live. Additionally, these operators are employed as part of a directed, iterative design process which seeks continual improvements in efficiency and cost. Most innovations are process and design innovations, not revolutionary breakthroughs.

Judicious use of these operators is also necessary, as misuse can lead to a riskier system. For example: splitting can lead to loss of functionality; substituting (normally done to lower transaction costs) can lead to poor performance; augmenting can lead to useless feature creep; excluding key elements can make the system more vulnerable to attack; inverting can create single points of failure; and porting can lead to system failures because the module takes on work it is poorly designed for, again driven by the need to lower transaction costs.

These basic principles, modularity and flexibility (lack of permanence), taken from generic design, should inform the work performed by security architects. In the next post I will examine these principles at work in security engineering.

Musings on Segregation of Duties: Are your auditors NP-complete? (Part 2)

In the first part of this post we looked at how finding a minimal set of people satisfying a list of SOD constraints is equivalent to finding the chromatic number of a graph, which is NP-complete. We then took a list of possible SOD conflicts and showed a simple transformation to a graph form which can then be further analyzed with standard mathematical programs like Matlab or Mathematica.
But these were toy examples. Let us therefore take a real example: an analysis of the standard rulebook delivered with SAP BusinessObjects Access Control 5.3. When cleaned of degenerate cases, this rulebook contains 183 risks. Transforming this rulebook into a graph structure (GraphML) and visualizing it with the excellent yEd program yields the following structure (see image). Attention: the colors in this image are just for aesthetics; they do not correspond to the vertex coloring we will apply later.

SOD Graph of SAP Business Objects Access Control 5.3

I spoke in the last post about a randomized algorithm. Unfortunately I cannot find it anymore, so we will instead use a greedy heuristic algorithm for vertex coloring by Brélaz. Running this algorithm on our data gives a result of 5: theoretically we need just 5 people to comply with all the Segregation of Duties constraints.
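For the curious, Brélaz's heuristic (often called DSATUR) always colors the uncolored vertex with the most distinctly colored neighbors next. The following is my own simplified sketch of the idea, not the exact program used for the rulebook analysis:

```python
def dsatur_coloring(adj):
    """Greedy DSATUR vertex coloring (Brelaz's heuristic).
    adj: dict mapping each vertex to the set of its neighbors.
    Returns a dict mapping vertex -> color (ints starting at 0)."""
    colors = {}
    uncolored = set(adj)
    while uncolored:
        # Saturation degree = number of distinct colors among neighbors;
        # pick the most saturated vertex, breaking ties by plain degree.
        v = max(uncolored, key=lambda u: (
            len({colors[w] for w in adj[u] if w in colors}),
            len(adj[u])))
        taken = {colors[w] for w in adj[v] if w in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in taken)
        uncolored.remove(v)
    return colors
```

Like any greedy heuristic it gives an upper bound on the chromatic number, which is why the 5 found for the rulebook is not guaranteed to be optimal.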

Amazing, isn’t it? Imagine all the discussions we had with clients and auditors along the lines of: "We would need hundreds of people to comply with all SOD constraints." And 5 isn’t necessarily an optimal solution; it is just a heuristic solution found in a short timeframe. Maybe there is a solution with 4, but it might take too much time to find it.

So, if it takes only 5 people, then even small branches could implement it? Well, it is not so easy. We will look at some of the solutions and realize that, while theoretically possible, they sometimes make no sense from an organizational theory perspective. Imagine putting an ad in a newspaper for the following skills: "Looking for excellent candidate with know-how in SAP development, month-end closing, and Accounts Receivable clerk experience." Again, this dreaded gap between the theory and the doable!

But out of curiosity, let’s take a look at 2 areas from the analysis: Procure-To-Pay (P2P) and Order-To-Cash (O2C). In the attached images I have listed the different functional groups and given each a color. In O2C, for example, we would need at least 4 people to satisfy all SOD constraints; in P2P we would need at least 5.

O2C: Order-To-Cash

P2P: Procure-To-Pay

Other things to consider:

  • We ran the algorithm globally on the SOD list, but you could also be interested in running it only inside Finance or Procure-To-Pay. That might yield a globally less optimal solution but might be more suitable to how companies are traditionally organized.
  • Stability of the solution: if the SOD list changes frequently, this could completely change the results of the analysis. Just changing 2 or 3 SOD constraints could produce a completely different grouping of the functions. Fortunately, SOD lists do not change that frequently.
  • Organizational constraints 1: companies always need "backups". Someone is absent, sick, or traveling; work has to continue, and this could easily double the minimum number of people required.
  • Organizational constraints 2: languages. This problem is often found in large Shared Service Centers. When serving multiple countries with diverse languages, SSCs tend to organize around countries as much as around processes. This creates a second dimension of complexity and can easily undermine SOD requirements out of practical needs.

Now you have a number of discussion points for the next round with your auditors on Segregation of Duties. You can:

  • Mathematically show that your small branches may not have enough people to comply with their requirements, and that there is therefore no option other than to use mitigating controls.
  • Show that parts of the SOD discussion are, even in this static and simplified form, NP-complete. And considering that your organization is a living entity with ever-changing processes and organization, it is even more complex.
  • Analyze your SOD list for some insights about how you split up the responsibilities inside processes.

In a next post I will talk about a possible usage of these ideas at small companies, called ColorSOD™.

SAP Opportunity Discovery Camp 09 – Interlaken, Switzerland

I will spend 2 days in Interlaken at the SAP 2009 Discovery Camp for partner companies. Interesting program this afternoon with:

  • SAP Business Objects Portfolio: Strategy and Execution
  • Legal Requirements with GRC
  • IDM & GRC for CIOs
  • Company wide Data Warehouse

Good to catch up with colleagues from partner companies like Cirrus and old Deloitte and PwC beer drinking champions (you know who you are).

I found the first presentation particularly interesting. GRC is getting integrated into the Business Objects portfolio together with Business Intelligence, Strategic Enterprise Management, and Information Management. It seems the now three-year-old effort to combine GRC and EPM initiatives is getting some traction from the software side too.

Musings on Segregation of Duties: Are your auditors NP-complete? (Part 1)

Segregation of Duties, also called Separation of Duties (SoD), has been in the headlights of public accounting firms since the beginnings of the Sarbanes-Oxley regulations, specifically section 404. Wikipedia describes it as "the concept of having more than one person required to complete a task. It is alternatively called segregation of duties or, in the political realm, separation of powers".

Translated to application security, this means that no person should have a set of authorizations that enables them to perform two incompatible tasks of a process. The easiest example relates to the procurement process: the person creating a Purchase Order should not be the same person doing the Goods Receipt, and yet another person should be doing the Invoice Receipt. Depending on the size of the company, the process could be further split between the person doing the Invoice Receipt and the one doing the Payment run.

Such infamous lists of SODs are normally given to companies by their auditors, and they are all very similar (from my experience working at PwC and Deloitte). Companies or the auditors then run automated programs to analyze the state of authorizations in certain critical systems, such as SAP, against these SOD requirements. They normally get "sexy" Excel lists with tens of thousands of conflicts and start cleaning up in an iterative process until the auditor is happy.

In 2003, however, I was confronted by a client with a completely different approach. We were in a big-bang project that was completely reimplementing the client's business models, processes, and organization and implementing a new centralized SAP system. I was in charge of the Process and Security team, which covered SOX 404 compliance, the implementation of a new internal control framework, and SAP security. The head of the shared services department came to me:

“Greg, I can setup the shared service center to be segregation of duties clean from day 1 of operations. I have received this list from the auditor, how many groups would I need for implementing all SOD rules?”

You have to imagine those lists as being some 300-400 lines long, with entries like this:

(PR-1) Purchase Orders # (PR-2) Goods Receipt
(PR-3) Invoice Receipt # (PR-2) Goods Receipt
(MD-1) Bank Master Data # (PR-1) Purchase Orders

over all areas of business, from finance to sales. What the client was asking me was: given this list of 300-400 rules, what is the minimum number of people we would need to avoid all segregation of duties problems? (For the expert: I am simplifying the problem here; in real life you would clearly try to use mitigating controls rather than solve everything through organizational changes.)

After some thinking about the problem, I was quite happy to have been attentive during the theoretical computer science lessons at university. It turns out that what the client was asking me was in fact equivalent to a well-known problem called "vertex coloring". Unfortunately, it is one of the so-called NP-complete problems. Finding the smallest group of people satisfying all SOD constraints is equivalent to finding the smallest number of colors needed to color a graph G: this is called its chromatic number. How do we transform one problem into the other? It turns out to be quite straightforward: imagine tasks like "creating purchase orders" as vertices of a graph, and connect two tasks by an edge whenever they are in conflict. Based on the previous list, we get the graph representation shown in the image.

Finding the chromatic number of a graph

With only those 3 SOD rules to satisfy, we would just need 2 groups of people. For small graphs it is quite easy to find an optimal solution, but for graphs with 50 or 60 vertices and 300-400 edges the complexity explodes. I had a dedicated program (Mathematica) run a whole night without finishing. Fortunately, there are so-called randomized algorithms coming to our rescue. They can find approximate solutions in a short time frame. They do not guarantee finding the optimal solution, but can often guarantee, for example, that the solution found is no more than twice as bad as the optimal one. So I ran the randomized program and waited for a solution.
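The rules-to-graph transformation and an exact chromatic-number search can be sketched for the three toy rules above. This brute-force search is my own illustration (fine for toy graphs, hopeless at 50+ vertices, which is exactly why heuristics are needed):

```python
from itertools import product

# Each SOD rule "task A # task B" becomes one edge of the conflict graph.
rules = [("PR-1", "PR-2"),   # Purchase Orders # Goods Receipt
         ("PR-3", "PR-2"),   # Invoice Receipt # Goods Receipt
         ("MD-1", "PR-1")]   # Bank Master Data # Purchase Orders

tasks = sorted({t for rule in rules for t in rule})

def chromatic_number(vertices, edges):
    """Brute force: try k = 1, 2, ... colors until some assignment
    gives every conflicting pair different colors. Exponential time."""
    for k in range(1, len(vertices) + 1):
        for assignment in product(range(k), repeat=len(vertices)):
            coloring = dict(zip(vertices, assignment))
            if all(coloring[a] != coloring[b] for a, b in edges):
                return k, coloring

k, groups = chromatic_number(tasks, rules)
print(k)  # 2 -- two groups of people satisfy these three rules
```

Each color is one group of people; any task can be assigned to any person in a group of its color without violating a rule.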

Part 2 of the blog in the next days …