A classic example of this is an AS/400 installed in the 1980s to run a single application, which has been upgraded to iSeries and may soon be upgraded to System i5. It may have been able to pass a security audit with flying colors in its early years, but because of many types of changes, it may no longer be secure.
When I started to plan this article, I thought back to the time when I was a programmer on the System/38 and the other "midrange" systems. It became very obvious that since then, I've changed (and I don't just mean my waistline and my hair color), my opinion of data security has changed, and the landscape has changed too.
We never intended to write systems that weren't secure. Far from it. We actually incorporated a number of security features that I know still apply to applications being developed today. However, security was not at the front of our minds when we designed applications, chose products, and rolled out those business systems. That mindset is changing only slowly, but it is changing.
In many organizations today, including some of the major software providers, security is now as much a part of the culture as dress-down Friday or the etiquette regarding whom to copy on your emails. In some recent situations, major application releases have been delayed because of the findings of the security team, not just the QA team. A classic example of this is the stance taken by Mary Ann Davidson, Chief Security Officer of Oracle. One of the first things she did on joining Oracle was to get high-level commitment that Oracle would not pay lip service to security issues. It has not meant the end of security challenges in Oracle products, far from it, but the company is changing and setting a good example.
I think the best way to evaluate the current situation is to look back at where we were 10 or 20 years ago. We will see that, as with many aspects of our lives, the only constant is change.
Application Ownership
For example, who owned the data and the applications 10 or 20 years ago? Often, the conceptual ownership of an application was given to a business manager, who had a hands-on role in authorizing the users of the application. Sometimes, those application owners granted a higher level of authority to the users than was appropriate for their role in the business. But when questioned, they claimed that they had good reasons for the security decisions they made: The business process could not be interrupted because someone didn't have the right access. They may also have claimed that security was just another example of "corporate paranoia," and it was totally unnecessary because they had a high level of trust in all of their colleagues!
Move forward to today, into a landscape altered by those dreaded regulations. It is obvious that the high-level executives own the applications now. With Sarbanes-Oxley requiring senior executives to assert each year that they have checked and verified their processes, they have a vested interest in ensuring that the correct "who" gets to the appropriate "what."
Access by Menu
Another big change is how our users get to the data we hold. In the midrange space, the green-screen menu used to be king. Menus were the main tools used to limit people to the right areas of the system. Of course, some menu systems were better than others. I worked with a banking system in Europe in which the maintenance screen for a user's access to menu options was purely a table with 0s and 1s. Each element in the table represented a program available in the menu system. If a user had a 0 in a cell, that meant he had no right to see or use that option. If it was a 1, he could use it. That was a security administrator's maintenance nightmare, leading to many mistakes and misallocations.
Some menu systems were good, with easy-to-use maintenance procedures and genuine security features (they were not designed merely as navigation tools). But the menu was the only real access method the security administrator had to worry about. When midrange systems started to utilize more TCP/IP methods and client front-ends, it was obvious that things were going to change, but few people would have predicted that legacy menu systems would become the least of our worries. Now there are so many ways to get to the data (client tools, browsers, PC utilities, and so on) that the security implementation must take a different approach.
As Pat Botz mentioned in his two-part article series, maybe the best way to address this challenge is to use an exclusionary access control model. The basis for this is that you need to start from a position of exclusion—that is, prevent everyone from reading or changing the data, no matter where they access from. As the business identifies legitimate access reasons, the authority must be opened up only for that group of users, for that type of access. Just because a user needs access to update the payroll file through the payroll programs does not mean she should be able to update the payroll file through Microsoft Access.
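On OS/400 and i5/OS, that starting position maps naturally onto object-level authority. The following is a minimal sketch of the idea; the library, file, group, and program names are invented for illustration, and your own object names and authority levels will differ:

  /* Start from exclusion: no direct access to the payroll file for anyone */
  GRTOBJAUT OBJ(PAYLIB/PAYMAST) OBJTYPE(*FILE) USER(*PUBLIC) AUT(*EXCLUDE)

  /* Open up read-only access for the one group with a business reason */
  GRTOBJAUT OBJ(PAYLIB/PAYMAST) OBJTYPE(*FILE) USER(PAYCLERK) AUT(*USE)

  /* Allow updates only through the payroll programs by having the */
  /* program adopt its owner's authority instead of the caller's   */
  CHGPGM PGM(PAYLIB/PAYUPD) USRPRF(*OWNER)

With *PUBLIC set to *EXCLUDE, any new access method that comes along (ODBC, FTP, a browser interface) arrives locked out by default, and authority has to be consciously opened up rather than retrospectively closed down.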
This is not a simple task to achieve because the security administrator must try to define a security implementation that will be usable in the future, independent of any new access methods that may be introduced. But who has a crystal ball to know what will be needed? The only safe way is to maintain an exclusionary model and to regularly review how new technologies affect the applications. You cannot rest on your laurels and say that the security model is complete.
Encryption
There has been a noticeable change in attitude toward encryption as it affects our organizations. Way back when, encryption was a technology used by only a small subset of the business community. In those days, there was a belief that the only data worth securing to the nth degree was financial transactions in transit.
Now, encryption has a place in most organizations. Whether it is for backups, outgoing emails and files, or individual data elements, encryption is important. Recent announcements show that encryption technologies are becoming more available and relevant for securing OS/400 and i5/OS. The need for more encryption is driven by perception as much as by regulatory compliance recommendations. The media is awash with situations in which data has been compromised, and everyone knows about identity theft and phishing techniques.
Another Problem: The Users
One pet peeve of mine is the belief that all identity theft could be stopped by better software technology. This is blatantly incorrect, as one recent case showed: In April 2005, IT users at a number of banks sold client data to someone representing a collection agency. One of the conspirators obtained lists of people who were sought for debt collection and turned that information over to the bank employees, who compared those names to their client lists. The bank employees were paid $10 for each account they passed back to the organizer of the fraud. The media jumped on this as another example of flawed IT systems. What they conveniently ignored is that these IT users were permitted to see the client data; they were permitted to see Social Security numbers, addresses, and phone numbers. The crime occurred when those legitimate bank users transcribed this information onto paper, took it outside the building, and sold it to the conspirators. It seems to me that the biggest problem here is that the users had no loyalty to the banks. They were prepared to sell their employers' secrets to make a quick buck. This is more of a business and social challenge than a technical one. A business cannot restrict all users from viewing data; otherwise, collecting the data would be pointless.
The Task of the Developer
Developers these days have a complex task ahead of them. As well as the normal instructions for design, standards, layout, and documentation, they will also receive instructions on the security policies that the application must adhere to. In days gone by, developers often believed that security was something done by the security administrator when the applications were promoted live. Security was rarely incorporated into the development and testing phases.
For years, security consultants have recommended that vendor applications be evaluated for security principles as well as business functionality. It was not unusual to find applications that worked reliably only when all users were given *ALLOBJ authority or some high-level group profile. Now, we are seeing applications that adhere to security principles because clients require securable systems.
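Incidentally, checking for this situation does not require special tooling. One quick way to see which profiles carry all-object authority is the print-user-profile command; a sketch (the exact report layout varies by release):

  /* List every user profile that holds *ALLOBJ special authority */
  PRTUSRPRF TYPE(*AUTINFO) SELECT(*SPCAUT) SPCAUT(*ALLOBJ)

If that report includes a long list of ordinary application users, the odds are good that a vendor package once demanded it.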
Obviously, the current set of federal and industry regulations will not be the last. We can expect to see more-detailed compliance rules and stronger interpretations of those rules. At a COMMON presentation on the HIPAA regulations a few years ago, the speaker stunned the crowd with his interpretation of those regulations. Specifically, he revealed that HIPAA auditors could insist that companies track every time any user reads files critical to the privacy of personal data.
Let's analyze that requirement. How many software applications currently record an audit transaction for every viewing of a record? Most will audit changes, but very few audit the reads of the data. For those few applications that do have an audit trail for both changes and reads, how is that file secured? Can it be protected, archived, and interrogated? But more critically, what is the likelihood that the viewing of the data has occurred outside of the application? If there is a possibility that a user could access the data in other ways—such as through a file utility, SQL, an exit point, or a Web page—how will you audit that too?
Is it actually feasible to identify reads from all of those sources with total confidence? Maybe this was why there was such a gasp of horror at the COMMON presentation: The IT specialists there had no idea how they would achieve this requirement. Fortunately, we have seen an element of common sense among the regulatory auditors, especially where they recognize that it is not technically feasible to achieve what the regulations seem to be suggesting.
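For what it is worth, OS/400 and i5/OS do offer one mechanism that sits below all of those interfaces: object auditing. A sketch, again with an invented library and file name:

  /* Audit reads as well as changes for this one critical file */
  CHGOBJAUD OBJ(PAYLIB/PAYMAST) OBJTYPE(*FILE) OBJAUD(*ALL)

  /* Object auditing must also be enabled at the system level */
  CHGSYSVAL SYSVAL(QAUDCTL) VALUE('*OBJAUD')

  /* Read entries (journal code T, entry type ZR) accumulate in QAUDJRN */
  DSPJRN JRN(QSYS/QAUDJRN) JRNCDE((T)) ENTTYP((ZR))

Because the entries are written by the operating system, reads through SQL, file utilities, and PC clients are all captured. The catch is volume: on a busy file, the ZR entries pile up at a rate that makes the journal itself a management problem, which is precisely the practicality issue the auditors came to concede.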
The pitfalls of defining good security for data management should be regarded not as doom and gloom, but as an opportunity to build better applications. With common security sense, it is possible to implement applications that enforce the level of risk that is acceptable to your organization. In addition to the object authority challenges that I mentioned, there are also some design elements that can become part of your design process. For example, a financial accounting system I worked on years ago needed to process multiple divisions within a corporate infrastructure. The business users within that enterprise had some roles that were relevant corporate-wide and others that were relevant only to their own local business.
What we found was that sometimes, within the high-level programs, we needed to know who the user was and what section of the organization he came from. We chose to pass parameters to the program that included not only the user name, but also what application brought the user to this program (e.g., an AP clerk through an AP menu, or a payroll supervisor through the payroll system) and what level of permission the user was to be granted.
The result of this design was that we could reuse the same program, regardless of whether a senior AP supervisor or a basic payroll clerk accessed it. We could show different records (depending on what part of the business the user was allowed to see) and also suppress the display of certain data fields for different users.
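In CL terms, the calling side of that pattern looked something like this sketch; the program, library, and parameter values here are invented for illustration:

  PGM /* Menu-side wrapper around the shared account inquiry */
  DCL VAR(&USER) TYPE(*CHAR) LEN(10)
  /* Pick up who is actually running the job */
  RTVJOBA USER(&USER)
  /* Pass the user, the application that brought him here ('AP'), */
  /* and the permission level granted within it ('SUPV')          */
  CALL PGM(APPLIB/ACCTINQ) PARM(&USER 'AP' 'SUPV')
  ENDPGM

Inside the shared inquiry program, those two extra parameters drive which records are selected and which fields are displayed, so the same object serves an AP supervisor and a payroll clerk without duplication.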
We decided to standardize our security parameters based upon a set of details about each user, even if we did not see an immediate need for those parameters in every program. It was interesting to see, as our application developed over time, how many times we were able to use those parameters to enforce our security infrastructure. This reinforces that a commitment to a strong security plan is of great benefit if it is carried through.
Build for the Future
Many of the non-compliance citations being issued during the current round of audits stem from mistakes that could have been avoided with a stronger commitment to the security element of data management. By accepting that the landscape has changed and learning from these recommendations, we can build better business system implementations and become less likely to repeat the same mistakes.
In the past, we took too much for granted and made too many assumptions, but we did it in a much safer environment. Now, we are paying for those assumptions. Most regulations are general in their definitions, leaving the auditors or consultants to apply their own interpretations. A lot of headaches can be saved if organizations spend the time to make their own strong security commitments rather than waiting for an external authority to impose impractical suggestions.
However, do not be fooled! If you secure your system today, that's not the end of it! You may pass your next audit, but the next one will dig even deeper and investigate even more obscure areas. Guaranteed.
Martin Norman is Senior Systems Engineer for SafeStone Technologies, an IBM BP specializing in compliance and identity management. As one of the original developers of SafeStone's security portfolio, Martin has performed security audits and advised on installations for clients throughout the United States and Europe. Martin can be contacted at