A Death in the Desert

The often-repeated cliché “Better the devil you know than the devil you don’t” is invoked to justify taking no action in the face of something new: keeping something you understand and forgoing something with more uncertainty. The question of staying put or changing is a problem every CEO and executive contends with.  In my youth a much older and wealthier businessman told me that when you are an innovator and you are first to the well, you get a long drink.  And he was right; he just left out the part that most innovators die in the desert before they reach the well.  On my second death in the desert (with apologies to Robert Browning) I realized that successful enterprises walk quickly over the corpses of innovators, hang around the well having a good drink, and speak admiringly of the recently deceased, whose agonizing path made it all possible.  The market collectively is always smarter than any one individual innovator; the guesses of all the participants will eventually show the way (if one exists), and others will copy.

Returning to our original problem of which devil to choose, which we can make more concrete with an enterprise risk management example such as whether to upgrade the enterprise software or stay on the current version: what is one to do?  The answer comes down to access to capital and level of pain.  This may not be so obvious if you have spent your entire career in a large enterprise and take capital for granted.  Innovation fails so often that if you can afford to wait, do so.  Some will say “innovate or die,” but that truth specifies neither a time frame nor the degree of innovation required to survive.  In most cases it takes years longer than the salesman says, and good enough is sufficient if the customer is happy.  How long did it take for the Internet bubble to burst under the weight of bad models?  How long was General Motors able to ignore its customers?  It’s always longer than you think.  Those vendor discounts will lower your perceived risk, not your actual risk.  The insider clamoring loudest to be first may only want a bullet on his résumé or to see his name in the case study.  The small company with little capital must be more aggressive and, as the statistics show, will fail at a higher rate, from which we can all learn.

But if, appealing thence, he cower, avouch
He is mere man, and in humility
Neither may know God nor mistake himself;
I point to the immediate consequence
And say, by such confession straight he falls
Into man’s place, a thing nor God nor beast,
Made to know that he can know and not more:
Lower than God who knows all and can all,
Higher than beasts which know and can so far
As each beast’s limit, perfect to an end,
Nor conscious that they know, nor craving more;
While man knows partly but conceives beside,
Creeps ever on from fancies to the fact,
And in this striving, this converting air
Into a solid he may grasp and use,
Finds progress, man’s distinctive mark alone,
Not God’s, and not the beasts’: God is, they are,
Man partly is and wholly hopes to be.

Excerpted from A Death in the Desert

by Robert Browning

The Discipline of Disciplines

Every one of us today is attempting to move from a less desirable state toward a more desirable one.  We are goal-directed; whether those goals are noble or ignoble matters little.  The farmer and the criminal alike seek something more desirable.  This is true of individual behaviour and of corporate action.  Along this path we face scarcity of resources and absolute limits on our time.  Eventually we die.  Because of all this, we make daily trade-offs in the actions we take and don’t take, and we approach our goals systematically.  The fool and the wise both have a process.

So when a business fails to address any number of risks, is it behaving irrationally?  Anyone with in-depth knowledge of a field is tuned into its risks.  The Bundesnachrichtendienst knows more about the threat of terrorism to Berlin than I do.  The top security researchers know just how vulnerable our systems are to directed attack by a highly skilled person.  Many security researchers carp endlessly that people “don’t get it,” that they are victims waiting to happen.  As a generalization this is true, but I believe the behaviour is not irrational.  The corporate result may appear irrational, but the individual activity is not.  Most people assess risks by their locality and recency; for example, if you discovered that several of your neighbors had been mugged, you would grow more cautious.  There have been enough viruses and exploits that most people now exercise some degree of caution.  Inside corporations we have policies, practices, frameworks, and protocols to address the threats, but there will always be residual risk.

When I found outstandingly bad practices inside a company, it used to be because of ignorance.  This is not a slur; they didn’t know or understand.  In the last five years it has increasingly been due to limited resources.  I believe this is progress.  The goal of security has always been to build reliable and dependable systems in the face of misadventure, malice, and error.  I would add a secondary goal: to accomplish a better result over time at a lower cost*.

You cannot address all risks, so you have to divide them into those that can be measured statistically, with probabilities assigned, and those whose statistical profile is not known, for example a terrorist attack or an earthquake.  For those with known probabilities and losses, the approaches are well established.  For those that occur infrequently with catastrophic consequences, the best we can do now is build resilient systems and set aside financial reserves.  Time and experience aid us in addressing risk; we will never be fully protected.  The purist and the idealist in any field are cynics in training; the discipline of disciplines is balance.
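For the first category, risks with known probabilities and losses, the standard back-of-the-envelope quantification is annualized loss expectancy. A minimal sketch in Python; the risk names and figures below are invented purely for illustration:

```python
# Annualized loss expectancy (ALE) for risks with known statistics:
# ALE = single loss expectancy (SLE) * annualized rate of occurrence (ARO).
# All figures are hypothetical.

def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Expected yearly loss for one risk."""
    return single_loss_expectancy * annual_rate

risks = {
    "laptop theft":       ale(2_000.0, 3.0),    # 3 incidents/year, $2k each
    "commodity malware":  ale(10_000.0, 0.5),   # one incident every 2 years
    "data-center outage": ale(250_000.0, 0.1),  # one incident per decade
}

# Rank risks by expected loss to prioritize spending.
for name, loss in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} ${loss:>10,.0f}/year")
```

The second category, the rare catastrophic event, is exactly where this arithmetic breaks down: no defensible ARO exists, which is why the text falls back on resilience and reserves.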

*This must be done even in the face of ossified regulatory burdens.

CIOs Dismiss Cloud Security Concerns

Today I read an article on the web about CIOs dismissing cloud security concerns. I found the article quite irritating: first, because this has not been my experience in the field, where CIOs are very concerned about security, if only from a compliance perspective; and second, because this is only one of many articles I have seen in recent months trying to downplay security concerns in cloud computing.
Clearly you can start to feel the frustration of many cloud computing providers and vendors at the slow adoption of their new vision of IT by most corporations. And is it really new? As an ex-IBMer I remember the ON DEMAND campaign almost 10 years ago; wasn’t that basically cloud computing? Well, wake up, cloud computing providers, and start knocking on the doors of IBM, EDS, and all the other providers of outsourcing services. They have been there before from a security and legal standpoint. They will tell you how painful it will be to convince large corporations to move their crown jewels to the cloud.
Instead of downplaying security concerns, why not lay out a roadmap for your clients showing how you will solve their concerns about cloud computing? Large corporations will always test you. You will get a little piece of the action (sandbox environments, for example) and will have to prove yourself worthy. Then, if you are successful, you will get a larger part of the pie. What, then, are the security and compliance requirements of corporations that cloud computing providers will have to address in the coming years? Here is a short list:

  • Robust Access Control Capabilities: (Above all for providers like Google App Engine and Windows Azure)
  • Logging & monitoring
  • Audit trails
  • Long-term Archiving
  • Legal support for cross-national compliance issues (Ever wondered why big outsourcers like IBM/EDS have at least one outsourcing center in every country?)
  • SAS70 certifications and Security SLAs
  • Assurance that the Big 4 audit companies are going to support this move: if you do not convince Deloitte, E&Y, PwC, and KPMG, you will be facing an uphill battle

So, dear cloud computing providers, get back to the drawing board and spend some money on these fundamental questions instead of on ridiculous surveys.

The Essence of SOX, J-SOX, EuroSOX and many more

While I was preparing a speech about the influence of national and international laws on the content and design of GRC tools, I did some research and tried to find all the laws and regulations that you need to follow when you run your business.

I started with SOX section 404 (we all know it well), moved to J-SOX and EuroSOX, and then to German national law such as the Abgabenordnung (AO) and the Handelsgesetzbuch (HGB)… it was quite interesting to read all those modern sections detailing out the traditional laws. I continued and looked into the German data protection law, the Bundesdatenschutzgesetz, followed by some research on the German corporate governance law KonTraG (Gesetz zur Kontrolle und Transparenz im Unternehmensbereich).

If you then start to look into regulations for specific industrial sectors, you will find many more. A very prominent one is Basel II for the banking sector, which regulates which risks a bank may take and which ones are too dangerous because they would erode its capital base (equity ratios).

After browsing through the German BSI IT-Grundschutz (roughly equivalent to ISO 27001), I thought it could be pretty simple… I had three letters in my mind; can you guess which ones?

The essence has not changed over the years as those laws were published and evolved. It is all about handling the complexity of the ever-growing volume of information and data handled in your IT systems.

In the end, the essence of all those laws is about establishing transparency by building up an internal control system. And here the Segregation of Duties (SoD) concept comes in: a control system means that you need to establish SoD checks so that no one person has power over a full business process. It is all about SoD, nothing more or less…
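The core of an SoD check fits in a few lines. A minimal sketch in Python; the duty names and conflict pairs are hypothetical, not drawn from any particular law or GRC product:

```python
# Segregation of Duties: no single user may hold two duties that together
# give control over a full business process. The conflict pairs below are
# invented for illustration.

CONFLICTS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"post_invoice", "approve_payment"}),
    frozenset({"change_payroll", "run_payment"}),
}

def sod_violations(user_duties: set) -> list:
    """Return every conflicting duty pair a single user fully holds."""
    return [pair for pair in CONFLICTS if pair <= user_duties]

clerk = {"post_invoice", "create_vendor"}
superuser = {"create_vendor", "approve_payment", "post_invoice"}

print(sod_violations(clerk))            # no pair fully held -> []
print(len(sod_violations(superuser)))   # holds two conflicting pairs
```

Real GRC tools work against thousands of such pairs at the level of transactions and authorization objects, but the check itself is exactly this set containment.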

The Regulatory Tail and Risk Management Dog

There is a seemingly endless number of information security frameworks available to the practitioner, life cycles to aid in understanding, and policies derived from frameworks.  In many cases the implementation takes place, but unless it is an audit item or there exists a regulatory compliance requirement, there is very little enforcement.  I have witnessed this in every major corporation I have worked with.  In most cases this is because of limited knowledge or limited personnel, which is driven by budget priorities.  Renewing the club membership for senior executives is more critical than adding one more person for policy compliance.  I write that without a hint of sarcasm.  If I have a senior marketing executive whose efforts are growing sales, it is money well spent.  Take that away and he may move to a competitor.  Sales and marketing are the apex of the pyramid; everything else is support.

Companies invariably treat the various forms of regulatory compliance as a nuisance to be dealt with, not as a risk management tool. The correct approach is to do risk management properly, and the regulatory compliance will follow. This is well known.  On the whole, however, it isn’t done, and many information security managers use regulation to push through better practices under the banner of meeting compliance.  It is ineluctable that regulatory frameworks will become outdated as innovation occurs, and those companies whose regulatory tail wagged the risk management dog will be more vulnerable than ever.

Thoughts on SAP Risk Management 3.0

“In the economy, an act, a habit, an institution, a law, gives birth not only to an effect, but to a series of effects.  Of these effects, the first one only is immediate; it manifests itself simultaneously with its cause; it is seen.  The others unfold in succession; they are not seen; it is well for us if they are foreseen.”

— Frédéric Bastiat

Bastiat was rumbling through my mind as I watched the SAP webinar GRC Partner Knowledge Session, “Process Control and Risk Management Enablement Session for Partners”. When it was over I had to look up the quote from his essay That Which Is Seen and That Which Is Unseen.  As the presenter showed the risk management process (Risk Planning –> Risk Identification –> Risk Analysis –> Risk Response –> Risk Monitoring) and how the software allows you to execute it, what is seen are all the risk management controls available in the software: compliance with regulation, supply interruptions, all the obvious routine problems that happen with some regularity.  We can even model that risk with Monte Carlo simulation using four very limited distributions: discrete, continuous, lognormal, and normal.  What is unseen is the cascade of events under way based on decisions made years ago.  The limits of our knowledge stare us in the face, but to know this is to be prepared.  We are most at risk when we take comfort in the process itself, with misapplied statistical measures of uncertainty: Monte Carlo simulation using distributions better suited to modeling roulette than real business life.  What is your exposure to rare events whose variance is not known?  We can imagine innumerable disasters, but how much money will be spent to survive the rare unexpected event when the quarterly earnings report is just around the corner?  SAP BusinessObjects Risk Management 3.0 is fine software, but not in the hands of the dilettante and the intellectually lazy.
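The point about distribution choice can be made concrete. A hedged sketch in Python (parameters are invented, and this is emphatically not how the SAP tool works internally): two loss models that agree on the routine case yet disagree wildly about the tail you actually set reserves for.

```python
import random

def p99(samples):
    """Empirical 99th-percentile loss."""
    return sorted(samples)[int(len(samples) * 0.99)]

random.seed(42)
n = 10_000

# Thin-tailed model: losses ~ Normal(mean=100, sd=30), floored at zero.
normal_losses = [max(0.0, random.gauss(100, 30)) for _ in range(n)]

# Heavy-tailed model: lognormal with a similar typical loss
# (median ~ exp(4.6) ~ 100) but far more mass in the extreme tail.
heavy_losses = [random.lognormvariate(4.6, 0.8) for _ in range(n)]

print(f"normal   p99 loss: {p99(normal_losses):8.1f}")
print(f"lognorm  p99 loss: {p99(heavy_losses):8.1f}")
# Both models look alike in the middle of the distribution; the rare-event
# exposure differs by several multiples, which is the unseen part.
```

A simulation built on the wrong family of distributions produces confident, precise, and wrong answers about exactly the events Bastiat would call unseen.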

Enterprise Role Management: Lost in the technical trap?

What exactly is Enterprise Role Management? It is a conceptual extension of the original RBAC model beyond a single system to a cross-system, enterprise-level RBAC approach. Unfortunately, because of marketing, everybody understands something different by Enterprise Role Management. For the sake of simplicity we will assume in this article that ERM is the practice of having a consistent set of procedures and methodologies around the definition of roles for all systems in a company. We will therefore only consider as ERM software those tools that help us define, construct, and manage roles in adherence to these enterprise-wide procedures and methodologies.

Today, unfortunately, most discussions around ERM start with aspects like Role Mining or Segregation of Duties. Role Mining is particularly insidious because its implementation is often based on a wrong assumption: that you can mine the actual roles and permissions in a company, find out what roles and permissions people have in common, and make a role out of them. Unfortunately, the evidence from our ten years of SAP security work has shown us that 70-80% of the permissions given to end users are never used. They just piled up over years of operations. What value is there in mining a base of information that is up to 80% noise? In those systems where it is possible to collect data on who is really using which permissions (and very few systems keep track of this), role mining may, however, be a very helpful tool in re-engineering a role system.
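Where usage data does exist, the contrast is easy to demonstrate. A toy sketch in Python; the user names and transaction-code-style permissions are invented, and real systems rarely log permission usage at all:

```python
# Toy illustration: assigned permissions vs. observed usage.
# All names are hypothetical.

assigned = {
    "alice": {"FB01", "FB02", "FB03", "MM01", "SU01"},
    "bob":   {"FB01", "FB02", "MM01", "MM02", "VA01"},
}
used = {
    "alice": {"FB01"},
    "bob":   {"FB01", "MM01"},
}

def unused_ratio(assigned, used):
    """Fraction of assigned permissions never observed in use."""
    total = sum(len(perms) for perms in assigned.values())
    dead = sum(len(assigned[u] - used.get(u, set())) for u in assigned)
    return dead / total

# Mining the *assigned* sets would propose a common role {FB01, FB02, MM01};
# mining the *used* sets proposes only {FB01}.
print(f"{unused_ratio(assigned, used):.0%} of assigned permissions never used")
```

In this toy data 70% of the base is dead weight, which is exactly why mining assignments rather than usage bakes years of accumulated clutter into the new role model.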

But then why are most vendors trying to sell through this angle? Because doing ERM the right way is a titanic amount of work, and most companies and back-end systems are definitely not ready for prime time. Let us forget about software completely for a moment and just think about the business implications of Enterprise Role Management. We would like to define a business role and all the associated required permissions. For example, we would like to define an HR clerk role and everything that is needed for his daily work. In the analysis phase we start collecting the permissions that would be needed in databases, shared folders, ERP systems, etc. In the simplest case all back-end systems have cleanly defined their roles and permissions, and the ERM role would just be a bag of back-end roles. STOP. This is not ERM; this is a technical project with no added business value. It just leaves the huge burden of defining the fine-grained authorizations to the back-end system owners. A real ERM project would contain most of the following elements:

  • Company wide procedures and policies on the definition of roles across systems and inside a single system
  • A unified way of dealing with privacy laws and other compliance requirements for all systems (including cross-system segregation of duties analysis)
  • Clear business responsibilities on either process ownership or information ownership (depending on company organization) that tie into the definition of new roles or changes to roles

Only in that way can you attack the main cost drivers of authorization management. Authorization management is expensive because activities like defining requirements with the business or defining privacy requirements are repeated over and over, year after year, always asking the same questions for each system: who can access payroll? Who can do payment runs? These questions need to be asked once for all systems, in a process-centric or information-ownership-centric way, and the answers stored in a wiki or some other database for consumption by all application developers and the ERM people. In a recent SAP security project of ours, 60% of the time was spent on requirements gathering (5 months, 4 people) and only 10-15% on role construction and maintenance; the rest was testing. It is therefore imperative for an ERM project to address the main driver of effort.

I think that the first vendor to address this part right will be the winner of the ERM race. So far the race has been a technical one with little business value, and most clients see clearly through the fog of marketing. You can have all the XACML, SAML, and other acronyms in the world on your product, but if you fail to provide a bridge connecting business with IT, the ERM promise will never materialize.

I am aware of being a little bit controversial in this piece, so if you disagree with the main points I would be glad to hear your opinions.

Security and Architecture Part III: Going Wrong

At its most abstract, security concerns itself with the protection of a specified object, whether that object is immaterial like information or physical like a building.  Information security practitioners more often than not concern themselves with safeguarding communication of some form.  Typically a protocol is designed whose goal is to assure one or all of the traditional CIA elements: Confidentiality, Integrity, and Availability.  Beyond that, security is no different from any other engineering problem.  We are attempting to reach an ideal state while weighing the benefit against the cost and the harm introduced by changes to the system.  There is no perfect state of security, only relative harmony with uncertainty.

So where do things go wrong?  For every advantage a principle imparts, there exist potential problems.  For example, one of the benefits of modularity is information hiding.  It is quite possible that, over time, the information contained in a module and the details of its operation fade from institutional knowledge through shifting priorities and loss of personnel.  As system complexity grows, interactions arise that were never anticipated and that permit security breaches.  This happens in normal system evolution even in the absence of malfeasance or error.

Each of the six operators on modularity, and the category of change it embodies, brings with it the potential for errors.  The first operator discussed was splitting, in which one module becomes two.  Take, for example, a single-level system that is split into two levels, a high security level and a low security level.  In this case we need a constraint of confidentiality on the high level: information should not flow down (the Bell-LaPadula model).  We prevent the lower system from reading the higher system, and we prevent the higher system from writing information to the lower system.  But what happens if the lower system writes up to a file with the same name?  Once we make a change, we need to re-think the security.
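The two Bell-LaPadula rules (no read up, no write down) fit in a few lines. A minimal sketch in Python for the two-level case; note that the file-name collision in the text is exactly the kind of problem these two checks, on their own, say nothing about:

```python
# Bell-LaPadula confidentiality rules for a two-level system.
LEVELS = {"low": 0, "high": 1}

def may_read(subject: str, obj: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject] >= LEVELS[obj]

def may_write(subject: str, obj: str) -> bool:
    """*-property: no write down."""
    return LEVELS[subject] <= LEVELS[obj]

assert not may_read("low", "high")   # low may not read high
assert not may_write("high", "low")  # high may not leak downward
assert may_write("low", "high")      # blind write-up is permitted, which is
# precisely why a low subject overwriting a same-named high file needs
# extra integrity rules that the confidentiality model does not provide.
```

The model guarantees confidentiality, not integrity; the write-up case is where the split module demands a fresh security analysis.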

Substitution, where we replace one module with an improved one, is the source of many problems.  One runs into regression errors, where problems fixed in preceding generations are reintroduced; Internet Explorer has had this happen several times.  Sometimes the substitution itself introduces new features that break the security of the module, if not of the entire system.

Augmentation, that is, introducing new modules or duplicating existing ones, creates problems too.  For cost reasons, failover systems are sometimes less robust than the main system.  A system comes online to replace the downed system and cannot handle the load.  This can also be exploited by intentionally increasing the load on the redundant system: wait for an outage, then attack the backup.

With the exclusion operator, removing a system element can create a weaker system, particularly if the removal is driven by convenience or cost, for example dropping two-factor authentication.  Alternatively, removing a module may reduce the system’s capability to respond to an environmental change: we exclude a module for security reasons and thereby decrease the flexibility of the system.

Inversion can go wrong because of its newly global nature.  Where before any weaknesses were local and isolated down the system hierarchy, now the system is overarching and can cause widespread damage.  Previously, I used identity management as an example of a system inversion.  Imagine a data synchronization event from a malicious administrator that changes everyone’s network ID.

Porting can go wrong through a limitation of domain knowledge.  Features of the system where the module was first created do not exist in the target system, and a lack of understanding of this introduces errors.  A network module of an application on Linux is going to need different safeguards when it is ported to Windows.

Traditional design offers the information security architect principles and a way of thinking about securing systems.  Analyzing systems in terms of modules, and of the operators on those modules, allows us to build flexible, robust systems.  But there are always trade-offs, and one must examine the operators for deleterious second- and third-order effects.  They are there, and you cannot possibly find them all, but thinking about them in terms of modularity and the module operators may help you find more.

What I have tried to do in this short series is show a way of approaching security architecture that draws on knowledge gained over the years and embodied in principles; principles that are reflected in complex adaptive systems.  Through increased modularity and flexibility, secure systems can be built at lower cost, reducing their impact on corporate profitability.

note: fixed typo and updated post.

Security Architecture and Design Part II

In Part I of this post I discussed generic design principles that apply directly to information security, viz. modularity and flexibility.  I discussed the six operators on modularity identified by Baldwin & Clark, and in this post I will examine examples of each of these operators from engineering secure systems.  To review, the operators were as follows:

  • splitting
  • substituting
  • augmenting
  • excluding
  • inverting
  • porting

The first thing typically done when redesigning a monolithic, interdependent system is to split the modules.  Take, for example, a symmetric cryptographic system which uses a single key for both encryption and decryption.  The symmetry can be broken by splitting the functions so that there is one key for encryption and one for decryption (a public key and a private key).  Although it wouldn’t make much sense in this particular example, once we have split the function, development can continue in parallel, with each new module following a separate evolutionary path.
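As a toy illustration of that split, here is textbook RSA with deliberately tiny primes (the standard worked example, not usable for real security): one key encrypts, a mathematically related but distinct key decrypts.

```python
# Toy RSA: the single symmetric key is 'split' into an encryption key (e, n)
# and a decryption key (d, n). Primes are tiny on purpose; illustration only.
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent via modular inverse: 2753

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m = 65
c = encrypt(m)
assert c != m and decrypt(c) == m
print(m, "->", c, "->", decrypt(c))
```

The two halves can now evolve separately, for instance optimizing the decryption exponentiation, so long as the mathematical relatedness (e·d ≡ 1 mod φ(n)) is preserved.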

The next operator, substituting, is complementary to splitting.  With substituting we replace an existing module with one that is “better” in some way, whether lower cost or higher performance.  For example, having previously broken the symmetry of our encryption, we are now able to optimize the encryption and decryption functions separately, so long as they don’t lose their essential mathematical relatedness; following this innovation we replace the existing functions.  The operation of substitution normally occurs over the useful life of the system.

Further into the lifecycle, the augmentation operator is used.  This is where a new module is added to an existing system.  For example, if only one person holds the launch authentication code for a nuclear weapon and that person is eliminated, the country loses the ability to respond to a first strike.  Therefore a new module is added, a backup system, in which the authentication code is a shared secret among several parties who, when they put their parts of the secret together, can derive the authentication code.
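That backup scheme is classic secret splitting. A minimal n-of-n sketch in Python using XOR; a real deployment would use a threshold scheme such as Shamir’s secret sharing so that any k of n parties suffice rather than all of them:

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, parties: int) -> list:
    """n-of-n secret splitting: each share alone is uniformly random;
    XORing all shares together recovers the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(parties - 1)]
    shares.append(reduce(xor, shares, secret))
    return shares

def combine(shares: list) -> bytes:
    return reduce(xor, shares)

code = b"LAUNCH-AUTH-CODE"
shares = split(code, 3)
assert combine(shares) == code        # all three parties together succeed
assert combine(shares[:2]) != code    # any strict subset learns nothing
```

The augmentation removes the single point of failure at the price of a new module (share distribution and storage) that itself must now be secured.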

The exclusion operator is the removal of a module.  We take something away to make the system more secure and possibly lower its cost.  A common example would be the removal of an application programming interface (API) prior to deploying an application to reduce the attack surface, or the removal of unused network protocols when hardening a system.

The fifth operator is inversion.  This typically happens later in the life cycle, after many design iterations.  An example of inversion would be an identity management system: where prior to its deployment each application managed its own identities, there now exists a single system which centralizes the function and eliminates redundancy.

Porting is the operator people are most familiar with, and there are numerous examples.  A common port is moving a security monitoring system, for example a host-based intrusion detection system, from one operating system to another.  Porting normally follows the design being proven on one particular platform, whether via testing or in the market.

Those are the six modular operators with examples.  In the next post I will address things that can go wrong when using them.

Theme Change Required…

OK, I had to change the blog theme again, hopefully for the last time. The problem with the previous theme was that it didn’t show the author of the post, and since this blog evolved from a single-author blog (Gregory Guglielmetti) to a multi-author blog (welcome aboard, Gregg Dippold), I had two options: modify the “fadtastic” theme we were using or take another theme that shows the author name. Being the lazy man I am, I went the easy route.