Information Security Management Handbook, Sixth Edition (2012)

DOMAIN 3: INFORMATION SECURITY AND RISK MANAGEMENT

Employment Policies and Practices

Chapter 14. A “Zero Trust” Model for Security

Ken Shaurette and Thomas J. Schleppenbach

When Was the Last Time You Changed Your Social Security Number?

This is an interesting question. Currently, for an individual living and working in the United States, a social security number (SSN) is the one thing that will truly stay with you “Until Death Do You Part.” No matter how clever we get, how many times we have moved, or how many controls are in place to protect it, the tie between an individual and his or her SSN leaves us vulnerable to those trying to steal identities for their personal betterment.

There are three words, or concepts, we would like to expand on as they relate to protecting ourselves against malicious people, whoever they may be: trust, risk, and data privacy. We will elaborate on each of these ideas to support the concept of zero trust.

Trust

Definition of Trust:

A charge or duty imposed in faith or confidence or as a condition of some relationship

Something committed or entrusted to the one to be used or cared for in the interest of another

Assured reliance on the character, ability, strength, or truth of someone or something

One in which confidence is placed

A good dictionary (like the really big thick ones at the library) will usually tell roughly when a word was introduced.

The origin of “trust” can be traced back to Middle English, probably of Scandinavian origin; it is akin to Old Norse “traust” (trust) and to Old English “trēowe” (faithful). The first known use was in the thirteenth century.

So the word “trust” has been in our vocabulary for some time.

We as Americans seem to be trusting in nature. We go about our lives living the dream, working hard, getting an education, buying a house or a car, and spending time with our families. To do any of these things, we must expose ourselves to risk and are vulnerable to those who prey on the unsuspecting.

Let us look at three simple questions affecting anyone over the age of 18.

Did You Go to College? Did You Take Out a Loan? Do You Have a Credit Card?

If you can answer yes to any of these questions, what are the chances that your personal information has been compromised and is stored on some geek’s hard drive, just waiting to be sold or used in some other potentially malicious manner?

The education industry in the United States failed millions of students in the early years of the electronic age, and its open-network philosophy toward the Internet has continued that trend as it relates to data privacy. As of 2005, higher education and even elementary education provided access to the Internet with limited controls in place, and information sharing was expected to be open, treated as a right or a freedom from restrictions on education.

For people who attended higher education years ago, the student ID was the individual’s SSN. Even if the ID was something other than the SSN, a number of systems storing millions of student records had weak access controls, thus making the information available, with little effort, to authorized and unauthorized people alike, especially hackers.

The banking industry, for example, has been a target since the dawn of time, from bank robbers then to those now attempting to compromise individual accounts to move and steal money.

Anyone who has taken out loans or held credit cards is in a similar predicament of sharing private information with an entity in which we have to place a significant amount of trust. Consider the number of years that credit cards existed before regulatory requirements such as those put in place by VISA and the Payment Card Industry (PCI), when the Data Security Standard (DSS), often affectionately named the 12-step program, was first implemented.

Before improved controls became a regulated requirement, just think of all of the electronic records with confidential data that were being stored in systems. Consider, over the last 60 years, all of the mergers and acquisitions, new systems, redesigns, and conversions; the system development life cycles where developers made complete copies of a database, importing the data into test systems, with minimal controls, to be used for system testing; and so on. Is there still the potential opportunity for someone to obtain those millions and billions of records on some old database? We wonder how slim the odds are that we have been missed in those databases, especially those of us from the baby boomer age group.

Hackers, crackers, smackers, or snapperheads (one of my favorite words) have been gathering data for many years, both before and after the inception of the Internet. Accordingly, many systems and databases were compromised, and information was downloaded and stored long before controls like those we see today were in place.

For how long have we given out our personal confidential information freely, simply trusting that the information would be handled safely by the organizations we had to deal with in our daily lives and those that we have worked for over time?

Risk

Are there risks in trusting? Of course; we see this all the time. In the movies we always hear, “but I trusted you”; betrayal seems to be commonplace to thicken the plot of any good movie or story. Unfortunately, this is also true in real life and business. Maybe it is not so much direct betrayal, but comes more from an inability of organizations to implement enough controls compared to the number of attackers looking for a way to gain access. Organizational challenges include difficulty outlining the risks so that informed decisions can be made to avoid a potential hazard to ourselves or the business.

So another word that ties directly into trust is risk. Trust is something that has to be established over a period of time; however, business must continue in the meantime, before that trust has been established. As a result, to form an initial level of trust, we must go through a process of weighing and assessing risk. Consider the risks that are involved in trusting. They are often somewhat contrary to the true definition of trust, but that is where we are in our society today.

Definition of Risk:

To expose to hazard or danger

To incur the risk or danger

Possibility of loss or injury

The degree of probability of a loss or peril

The word “risk” has also been in our vocabulary for a long time, with its first known use in the mid-1600s. One Webster resource site identifies the first known usage of “risk” as 1687.

Gregory H. Duckert, a noted author on risk management, states in his book Practical Enterprise Risk Management—A Business Process Approach that common sense is the best friend of managing risk. And we know just how often management decisions are based solely on common sense.

To illustrate our point, let us use a real-life story to characterize what happens when a common sense approach to assessing and managing risk is not applied.

The story begins on a sunny fall day in late September in Wisconsin. Our author was in the northern part of the state, relaxing on a weekend, participating in the popular fall Wisconsin sport known as “hunting.” Specifically, he was walking the woods for the ruffed grouse (aka partridge in an oak tree) and sitting in a tree stand with bow and arrow for hunting deer.

After spending the morning sitting in the tree with the bow and arrows, our author decided to take his young yellow Lab for a walk to see if the dog would be able to flush a few grouse. The grouse population had been in one of its down years, but the walk would be good exercise and provide a chance to see if the dog training was working.

As I was leaving the house, headed to the favorite grouse trail, Dad asked where I would be. I told him, and he simply stated, “Now we have had a lot of rain, and where you’re going the roads can get a little tricky!” I said, “Okay, okay, yeah, yeah,” and headed down the road.

As I reached the favorite trail, I was faced with a somewhat large puddle in the middle of the road. Now, I usually parked just around the bend from the puddle, which was only a short distance farther. Looking back, I should have heard Dad’s words ringing clearly in my ears and done a better job of assessing the risk, but that is all in hindsight, and we all know the saying: hindsight is 20–20. I should probably have backed up, pulled off the road, and started the afternoon’s hunt from there, but noooo.

Now let us take a closer look at the risks that were involved in the decision that was about to be made here.

Risk: Not achieving the main goal, which is to return to the cabin safely.

Impact: The impact ranges widely, from missing out on one of Mom’s home-cooked meals to ending up in the hospital, along with damage to the car.

Likelihood: High

Risk: Car gets stuck out in the woods.

Impact: Spending the night in the woods and potentially ending up on an episode of “I shouldn’t be alive”

Impact: A long walk to the nearest main road and the cost of a tow truck

Impact: Severe damage to the car, resulting in even more expenses for repair

Likelihood: High

Everyone performs mini risk assessments all the time as we make decisions in our daily lives. Simply walking out of the house, we assess the risk of getting hit by a car as we cross the road, or the risk of other bad things happening if we decide to go out to eat in that bad section of town with the great restaurant. We weigh the impact of what could happen and the likelihood that it will happen to us. Experience plays a very important part in determining how successful or unsuccessful we will be at assessing risk. Surely, the author’s risk assessment decision would be different now, with hindsight, using the experience he now has available to better rate the risk.
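To make the weighing of impact and likelihood a little more concrete, here is a minimal sketch of the simple qualitative scoring that most of these mini risk assessments boil down to. It is our illustration only, not a formal methodology; the three-level scale and the puddle-related entries are assumptions chosen for this example.

```python
# Minimal sketch of a qualitative risk-scoring exercise (illustrative only).
# The scale and the example entries are assumptions, not from the chapter.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(impact, likelihood):
    """Classic qualitative model: risk = impact x likelihood."""
    return LEVELS[impact] * LEVELS[likelihood]

risks = [
    # (description, impact, likelihood)
    ("Car gets stuck in the puddle", "high", "high"),
    ("Long walk out and a tow-truck bill", "medium", "high"),
    ("Missing the evening bow hunt", "low", "medium"),
]

# Rank the risks so the ones worth mitigating first rise to the top.
for description, impact, likelihood in sorted(
    risks, key=lambda r: risk_score(r[1], r[2]), reverse=True
):
    score = risk_score(impact, likelihood)
    print(f"{score}  {description} (impact={impact}, likelihood={likelihood})")
```

Even a back-of-the-envelope ranking like this forces the question the author skipped at the puddle: is the likelihood really low, or does it just look that way from the driver’s seat?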

The puddle did not look too bad, so our author just took the high side of the logging trail and made it past the puddle without incident. I guess the risk was not that great, but did we gauge the fact that we would have to come back the same way later that day?

Trail walking for grouse, with the trusty yellow Lab doing what she was supposed to do, continued without incident. After walking about three miles without seeing a grouse, it was time to head back to the car. It was now early afternoon, and there was still time to get to the car, grab the bow equipment, and get back out to the deer stand for the evening bow hunt.

We got back to the car, just a little tired, but the exercise felt good and it was still a very nice day to be outdoors. I cased up my gun, jumped into the car, and turned the car around to head back out of the woods. There it was: the same large puddle, same place, no change, but was the risk the same? With the first experience being successful, my experience would tend to lower the risk, so there I was, taking the high side of the puddle. However, this time I was on the other side of the puddle; the front tires spun a little, and the entire vehicle slipped right into the deepest part of the puddle. This puddle turned out to be a bit more than just a large puddle in the woods. Now that the car was stuck, it seemed more like a small lake, and darned if the risk of going past the puddle without getting stuck did not seem really high now.

The whole nose of the vehicle, just beyond the bumper, was now deep in water. Unfortunately, I was driving a Pontiac Grand Am and not a large four-wheel-drive truck or modern SUV. I got out of the car and found myself standing in water just below my knees. Yuck, this was one very deep and slippery mud puddle. In northern Wisconsin, there are two basic types of soil; mostly you will find sandy soil, but there are some areas where the soil consists of a heavy sand and clay mix. Well, how lucky could I be that this was one of those sandy clay mixtures? I guess that was never included in my risk assessment: the likelihood that the less desirable soil would be here, in this place of all places.

I probably should have spent more time gathering risk data to include in my assessment before making my decision to proceed past the puddle, such as:

How deep is the puddle?

What is the soil type: clay or sand?

What is the probability of getting stuck?

What would be the impact of slipping into the deepest part of the puddle?

Is the vehicle the right one to minimize the potential of going into the puddle and getting stuck?

Needless to say, without having done a very good risk assessment, I ended up very stuck. I took the tire jack out of the trunk and attempted to jack the car up. Next, I found some sticks, logs, and whatever I could that was solid and placed them under the front tires in an attempt to build a foundation so the car could get traction. Then I tried to drive forward out of the puddle. This process was repeated … well … let us just say I tried more times than I can actually remember.

It was now starting to get late, and I had about 45 minutes of light left in the day. I was covered head to toe in mud, my blue Pontiac was now brown, I was very tired and physically exhausted, I was about 10 miles from the nearest blacktop road, and I had no cell phone to call for help. In the back of my mind I could now hear my father saying, “Now we have had a lot of rain and where you’re going the roads can get a little tricky.” Why did I not take that into account in my risk assessment?

I walked up the road a short distance and sat down to think things over a little bit. As I was sitting there, to add insult to injury, a grouse strutted across the dirt road. I could hardly believe that my poor assessment process had put me into this predicament. All I had needed to do was park a mere 50 yards short of where I usually park, and I would have avoided the puddle altogether.

Well, I got a second wind and thought I would give the whole jacking-up-the-car process another try, but once again it failed. I definitely needed a tow truck. I called the dog, and we started down the dirt road toward civilization. I had walked about 100 feet before I heard a vehicle or vehicles driving down a road some distance away. What luck; I thought I had used it all up. They were getting closer and closer. Then here came four ATVs, turning the corner and heading my way. It was a small group of guys out for a fall ride.

They pulled up to me and my sunken vehicle, and one of them yelled out, “Wooh-ho, gettin’ awfully brave with that Pontiac, aren’t ya?” Swallowing my pride, as I was not in much of a position to say anything in rebuttal, I asked if they could possibly pull me out. Luckily, they had some chains with them. They hooked up two of the ATVs and, with me behind the wheel steering the car, we slowly pulled the Pontiac out.

I thanked them and stated that if I ever ran into them in the bar I would buy them a beer. It was just getting dark as I pulled into the driveway at the cabin. My father stepped out, looked at me and the car, just smiled ear-to-ear, shook his head, and said “It looks like you got stuck.” I did not say too much, as it was obvious I should have listened and heeded his warning, so I just took off my wet muddy clothes, took a hot shower, and called it a day.

What is that idiom? “Never trust to luck!” Gather data before making a critical decision in a risk assessment. Well, that day I got lucky. Oh FYI … luck should never be a part of your risk assessment process.

Obviously, I could have saved a lot of trouble by using a little common sense and considering the risk of going around that puddle. If I had, the whole experience could have been a pleasant memory of an enjoyable walk in the woods at the cabin.

Surprisingly, businesses make decisions every day contrary to common sense. In doing so, they end up in systemic failure, and the potential risk is that the organization could collapse. Failing organizations might end up making massive cuts in the workforce just to stay in business, or needing to close their doors completely. Was taking the risk, without at minimum a common sense assessment, worth the end result? Could a few more minutes of data gathering to include in the assessment process have been well worth a better outcome?

Using common sense to identify the risks in a trusted relationship will frame how we should view and handle each situation along with how the data and confidential information we store, transmit, or process should be handled and controlled. The methodology, the process of using common sense, provides us a very simple and basic ability to better assess and ultimately, better manage risk.

Data Privacy

Because “data privacy” is a phrase rather than a word, it is not likely to be found in the dictionary like the words we defined earlier. However, one way to tell when a phrase started to gain wide use is to search for it on the Web site www.newspaperarchive.com. Although this is normally a pay Web site, it is available free to holders of a local library card. Use the site simply by searching for the word or phrase and sorting the results by date.

As for “data privacy,” there are a few uses, dating back to the early 1950s, that lie outside of our familiar computer context. Usually, such terms are introduced in technical journals first and gradually find their way into mainstream use, such as newspapers. The first computer-related use of the term appeared in newspapers around 1971.

So starting about 15–20 years before the Internet, before anyone began commercially using the World Wide Web, “WWW,” the term “data privacy” was already being discussed in information technology professional circles.

Data Handling

Let us dive briefly into the history of how data was handled in the past and then move forward to how data is viewed today.

As was mentioned previously, individuals trust the companies they work for and those organizations have clearly let their employees down by not adequately protecting their personal information. This is probably why the government has established regulations on data privacy and the recommended secure data handling practices that are in place today.

Recently, while going through a few personal files from my past that I was considering discarding, I noticed that my old pay stubs, my medical cards, and even most of my medical bills had my SSN printed on them. To safely discard these files and manage the potential risk of releasing my personal information, I was going to have to shred the documents to protect my privacy. Here is another simple example of how freely people’s personal information was used by many businesses in several critical industries. Thinking more on this brief trip back in history, it is not hard to remember how much we took our personal information for granted.

Today, we expect that the personally identifiable data we disclose to the company where we work, or to the many organizations with which we are required to share our personal data, will be much better guarded than it was in our past. Or will it? Not all that long ago there was a lot of assumption that the data was protected, but as the number of breaches grew, it was easy to realize that gaps still exist in data handling. Too often there is a misunderstanding of the regulations that have been designed to improve the protection of nonpublic personally identifiable information (e.g., HIPAA, GLBA, FERPA).

Just the other day, I took my daughter to a new oral and maxillofacial surgeon to set up an appointment to have her wisdom teeth surgically removed. We were handed a form that was mainly directed at my daughter to fill out, but there was a section for the parent/guardian to complete. There was a line right after my name providing space for my SSN. I asked the individual at the desk, “Do you really need my SSN on this form? You have my medical card and my dental card; is that not good enough?” She indicated that it was required for billing purposes. I subsequently asked how the information on the form would be handled from a privacy perspective. She confidently stated that the information was protected under HIPAA regulations and would be handled appropriately. She was quick to ask if I had read the privacy notice. I did not say any more because I did not want to get into a heated discussion, but HIPAA is designed to regulate the protection of health data about patients, not necessarily my personal billing information. I was not the patient, and my SSN in combination with my name is not technically considered PHI (Protected Health Information). In that case, how does it suddenly fall under the HIPAA regulations? This makes me wonder what will establish requirements for protecting my (nonpatient) information within that organization. Granted, my information, although confidential, would not have regulated protection to ensure that it is handled appropriately. Do we simply trust that proper controls will be in place?

Moral of the story: there are still a lot of gaps in reasonable data handling, gaps in how organizations are training their staff to handle the data, and gaps in how organizational information security programs are managing risk. The gaps carry over to the regulations that we assume protect all confidential data: HIPAA = protected health information, GLBA = nonpublic personal information, SOX = financial data and financial statements, PCI DSS = credit card data. Way too often, each of these regulations puts blinders on an organization that looks purely to “comply” with a regulation but ignores the protection of data that does not fall directly into one of the categories it is regulated to protect.

As an example, bank examiners will review banks for compliance with GLBA. The workpapers that examiners/auditors use when reviewing controls consider only how the banks handle customer data; they do not care how the organization handles employees’ personal data or data from other sources that, although not regulated, still requires privacy and security considerations. Having personally audited numerous banking organizations against the GLBA regulation, I have seen, for example, a bank reach compliance with incident response requirements because it created an incident response plan that describes what to do should customer data be breached, yet the plan has very limited information covering any other computer incidents. By regulation, the bank is compliant, but in my opinion it may not have adequate controls in place for responding to incidents.

View on Information Security

In our history, just to qualify how far back we are talking, we only need to go back to 1997. From about 1997 until about 2004 (much less so today), many organizations across many industries would have laughed when asked if they were interested in completing an information security assessment or even a basic technical network vulnerability assessment. This included organizations ranging from manufacturing to retail and banking.

In 2001, we even tried to give customers a “free” technical network vulnerability assessment of their perimeter. This was offered as a value addition to other purchases they were already making. We had to discontinue the practice because it was beginning to take more sales effort just to “sell” (maybe it really was to educate) our customers on the value of getting their perimeter controls reviewed regularly, given the potential risk of attack from the Internet.

Why might this have been the case?

Many in the industry did not understand what a vulnerability assessment was a short decade ago. Now regulations such as the PCI DSS have set compliance requirements on organizations to perform at least quarterly vulnerability assessments for both internal and external networks.
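For readers who have never seen one, the sketch below shows, in very reduced form, the kind of check a technical vulnerability assessment starts with: discovering which common network services answer on a host. This is our illustration only, not the PCI DSS test procedure or any particular scanner’s method; the port list and target host are assumptions, and real assessments rely on dedicated tools, authenticated checks, and, above all, explicit authorization.

```python
# Minimal sketch of the first step of a network vulnerability assessment:
# checking which common TCP ports answer on a host. Illustrative only;
# real assessments use dedicated scanners and authenticated checks.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http",
                443: "https", 3389: "rdp"}

def open_ports(host, timeout=0.5):
    """Return the ports from COMMON_PORTS that accept a TCP connection."""
    found = []
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    host = "127.0.0.1"  # only scan hosts you are authorized to test
    for port in open_ports(host):
        print(f"{host}:{port} ({COMMON_PORTS[port]}) is listening")
```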

Security awareness, security policy, and related security processes are interesting concepts; unlike pieces of technology, applications, or systems, you are trying to work with and guide something that has incredible dynamics and complexity: people, the human being. Let us call it the “human firewall.” This is often the last bastion protecting an organization’s data. Prepare the human firewall with information security awareness, and test the controls of the human firewall with social engineering; phone and e-mail are common sample tests.

Technology, such as firewalls, routers, applications, and intrusion prevention devices and systems, has technical means to block or permit access to data and secure confidential information. Unfortunately, after deploying thousands of dollars’ worth of technical controls targeted at protecting the data and properly securing information assets, it can all be breached in less than 5 minutes simply by having one person in your organization say the wrong thing at the wrong time to the wrong person. Compliance may have been technically met, but it is all for naught if the data is compromised by careless people.

Human beings are walking vulnerabilities, capable of spewing out lots of information, and are virtually impossible to control. To secure the information held within humans, we would have to be able to treat them like machines, which they are not; and even then, there are enough struggles securing the data located on computing systems in the first place. That also means you cannot forget to educate and train your users. The best way for people to learn how to protect data is to use Ken’s Golden Rule: “Treat all data you work with like it is data about yourself or your family and you will provide it adequate protections!”

So how does an organization control the flow of information?

Can the information contained within us be managed?

The control that covers information security is policy. It provides the framework for the information security program and defines how people handle and treat the data they access and come into contact with each and every day. Organizations perform patch management and manage risk by establishing an information security program made up of various information security policies and standards, along with supporting procedures and guidelines. The objective of the information security program is to establish the importance that executive management places on information assets, adequate controls, and compliance with regulations. The company must clearly assert that there is significant value (and importance) placed on protecting its assets, consisting of business processes, raw data, customer information, and physical facilities. The components that make up the program may consist of policy, standards, procedures, and guidelines, and the program must have at least one policy statement establishing that information is important and must be protected, with procedures to support that protection. To further clarify:

Policies provide the directive statements that outline the information security objectives in topical areas.

Standards may be used to provide added operational detail requirements to further support the policy statements.

Procedures and guidelines document the instructive ways to implement and comply with policy or meet required security standards.

It is best never to mix policy with procedures. Making policy that can be clearly communicated at all levels of the organization is critical to the success of the information security control. Policy must be read and understood, and everyone must accept the organization’s expectation to comply.

As noted, procedures dictate the process in support of policy and provide steps for how information should be handled. Policy and procedures are both required for an effective information security program. Technology can be used to control access to sensitive data through access controls and the authentication of user accounts for access to the organization’s systems or applications. However, the authentication process has a major flaw: people hold the keys for proper access control, and they are often the weakest link. Regardless of how much money is spent on technical controls, a weak human firewall can result in other controls being bypassed.

We have seen that, even with the best of intentions, the human firewall is an easy target for social engineering attacks, simple mistakes, poor computing practices, and attacks such as phishing, deception, and malicious code. It is human nature to create trusted relationships with other people. By nature, humans are trusting, and that trust is reinforced by customer service training that instills in people the importance of being friendly, the focus placed on customer satisfaction, and being helpful. The challenge is that not all individuals have good intentions with the relationships they form, and especially with how they handle the information received or gathered. Attackers prey on this natural tendency to trust during what would seem to be a normal conversation between two individuals, when in reality it is an attack: the attacker mines data and exploits the trust established to gain access to information they might not otherwise have, yet another of the risks of trusting.

So how do we bridge the gap between people and securing information?

At this point, having absolute control over how humans communicate and how they use and disseminate information is never likely to be attained, so there is no absolute way to control the flow of information within the organization. However, we can monitor and track activity, verifying that data is being handled appropriately, to reduce the overall risk of trusted relationships: employer to employee, employee to employee, employee to employer, the organization to its customers, and so on.

As already mentioned, because confidential data (e.g., SSNs) was historically so poorly handled by many organizations and industries, in the grand scheme of things, unless you were born only 5–10 years ago, there is a strong likelihood that your data has already been compromised, and the probability could be quite high that a large volume of personal information is sitting out on storage media somewhere, just waiting to be released or used in some malicious manner. Does that mean that there are a lot of baby boomers just waiting to retire on the money they have spent their lives saving, only to find, when the time comes, that someone else has already made plans for their retirement money by using the breached data?

Trust, but Monitor

In order for organizations to assess how well they are functioning, frequent tests of internal controls and security must be conducted. The results should be used to determine where controls are efficient and effective and where new controls must be implemented.

Overall monitoring and logging operations should be established. The monitoring operations should be designed to produce results that are logical and objective. Test results that indicate an unacceptable risk in an institution’s security should be traceable to the actions subsequently taken to reduce the risk and improve controls. Tests should be thorough, providing assurance that the security plan and internal controls are meeting objectives. Testing is an ongoing process and should be frequent enough to encourage a proactive process and increase the potential for more accurate control testing results.

Policies, standards, plans, and procedures must be audited regularly to determine control deficiencies that can be repaired so that the security program can be enhanced and improved. Introduced in this statement is the concept of “plans,” which refers to documentation such as business continuity/disaster recovery plans, incident response plans, or a vendor management plan. Overall, the security program should be tested frequently and with a variety of tests to ensure protection from both internal and external sources as well as technical and nontechnical attacks. Vulnerability management, user identity management, business continuity, and incident management are all parts of the bigger concept of risk management. Other components, such as performance monitoring, monitoring of risk, and monitoring compliance by employees, vendors, or other third parties with security plans, laws, and regulations, are all important factors for an effective program. A key to accomplishing this is monitoring access and activities by users and computers in the computing environment. To make monitoring possible, we have to consider whether there is logging of information that can be monitored. Operating systems, databases, firewalls, and even applications must log events in order for most monitoring to be successful.
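To make “logging of events” at the application layer a little more concrete, the sketch below writes structured audit records for security-relevant actions. It is a hedged illustration rather than a prescribed format: the field names, the log file destination, and the example events are assumptions.

```python
# Minimal sketch of application-level audit logging (illustrative only).
# Field names, file location, and example events are assumptions.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("audit.log"))

def audit(user, action, target, success):
    """Record who did what, to what, when, and whether it succeeded."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "target": target,
        "success": success,
    }))

# Example events of the kind a monitoring program would later review.
audit("jsmith", "login", "core-banking-app", True)
audit("jsmith", "export", "customer_table", False)
```

Records like these are only useful if something, or someone, actually reviews them, which is the point of the monitoring discussion that follows.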

To provide a level of due diligence, so that an organization can show it is taking reasonable measures to provide data security and privacy for its customers and employees, an organization should not only manage but also monitor internal controls to ensure that employees are complying with policy and not engaging in illegal or immoral activities. Monitoring provides a proactive measure to identify when the human firewall has put confidential information, and most importantly the organization, at risk. The risk of data loss, the risk of financial loss, and the harder-to-quantify reputational risk are all potential concerns.

Most industry regulations require monitoring and measurement to some degree, setting expectations for organizations to track what is happening in the computing environment and implement incident response programs that can react to situations that are detected as unusual or warranting follow-up with appropriate actions.

Let us use schools for a case study. Remember when the hall monitor could see the bullies down the hall? White T-shirts with sleeves rolled up, a pack of cigarettes, huddled together, a little pushing and shoving happening, and one poor individual who seemed to be the center of attention. This undesirable bullying activity was quite easy to monitor and break up by dispersing the group.

Now fast-forward to today: how would the school or an organization accomplish similar monitoring of the bullying or malicious, maybe fraudulent, activity that occurs now? We can obviously start with knowing that somehow technology will be involved. The halls have moved to cyberspace, and the criminals and the bullies have too: Facebook, MySpace, Twitter, LinkedIn, instant messaging, and e-mail are all popular places to frequent and to use for malicious activity. In the corporate environment, the places may be less personal or social in use and could simply be our enterprise applications, databases, or the administrative tools that are used to manage our computing environments. These are all common places requiring levels of monitoring.

Are the activities in the cyberhalls, or of our employees when using company resources, tracked? “Trust, but verify” was a signature phrase Ronald Reagan used famously and frequently during his presidency. He employed it in public, although he was not the first person known to use it. President Reagan’s signature phrase has fresh currency in these times of cyber warfare and computer fraud. In the fall of 2010, Forrester released research studies and articles hinting at a new concept of “zero trust.” Is it really all that different from the idea of “trust, but verify”? It is easy to do the first (trust), but how does a company, or how should a company, accomplish the second (verify)?

Today, it is critical for organizations to have the ability to know what is happening in their network, on their systems, with their applications, and in their databases. Knowing who changed what and when is often difficult, compounded by few standards, technical jargon, and sometimes downright nonexistent logging or performance problems with logging access activities. Many tools for correlating logs are still too complex and too costly for wide adoption. Even if the information can be brought together, there remains the basic difficulty of making sense of the technical content of the data gathered.

Think back to 1996, when one of the authors was the network manager of a large call center that took technical support calls for several large organizations. There were approximately 500+ technical individuals taking level 1 and 2 support calls 24 hours a day, 7 days a week, 365 days a year. To say that these types of individuals love to experiment with technology would be an understatement. One day, the network team received a call complaining about a network slowdown. The issue continued to repeat itself, but it was sporadic. The network manager put a network sniffer in place to assist in identifying the cause.

Network sniffers are programs that monitor and analyze network traffic, detecting bottlenecks and problems. A sniffer can be a self-contained software program or a hardware device with the appropriate software programming. Sniffers typically act as network probes to examine network traffic and make copies of the data without redirecting or altering it.
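As a rough illustration of how a software sniffer can help find the source of a sudden flood of traffic, the sketch below captures packets and tallies them by source address to surface the “top talkers.” It assumes the third-party Scapy library, sufficient privileges to capture packets, and authorization to monitor the network; it is not the specific tool used in this story.

```python
# Minimal sketch of using a software sniffer to find the busiest source
# addresses during a slowdown. Requires the third-party "scapy" package,
# packet-capture privileges, and authorization to monitor the network.
from collections import Counter
from scapy.all import IP, sniff

talkers = Counter()

def tally(packet):
    """Count captured packets by source IP address."""
    if IP in packet:
        talkers[packet[IP].src] += 1

# Capture 1,000 packets (adjust the count or add a timeout for the environment).
sniff(count=1000, prn=tally, store=False)

for source, packets in talkers.most_common(5):
    print(f"{source}: {packets} packets")
```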

A plan of action was now prepared, just in case the “network slowdown” began again. Then it happened: the call came in that the network was running slow. The network room was called, the sniffer was turned on, and it began capturing the sudden flood of traffic. Quickly, it was possible to identify the source of the additional traffic by the IP address of the device where it was originating. At that time, every system deployed used what was known as static IP addressing.

Static IP addressing involves manually configuring a unique IP address for each computer versus the use of DHCP (Dynamic Host Configuration Protocol), which uses an application to dynamically assign IP addresses. With dynamic addressing, a device can have a different IP address every time it connects to the network.

By checking the IP address mapping that charted location by floor, row, and cubicle, the network manager was able to very quickly go physically to the location to find out what was happening. The events were very cryptic, and it was difficult to determine exactly what might have been happening based on the traffic alone. On arrival at the cubicle, the support representative was on a customer call. One of the desktops was being used to track the problem with the customer, and the other appeared to be running some kind of tool against the network. The representative turned around and, upon noticing the network manager, immediately hit the power button to shut the desktop system off. Under questioning about what was going on with the powered-off desktop, the representative simply responded with a dumbfounded look and said, “What????” The bottom line is that he would not admit to anything special going on and did not indicate what the spare system was doing. Well, the tool that the individual had been running over the course of about 3 weeks turned out to be a “get admin” type tool that was very network resource intensive.

“Get admin” tools are applications specifically designed to gather the administrative password from an independent workstation, server, or network operating environment. More specifically, they were often hacking tools used to gain unauthorized access.

Determining what was being executed on that system took another week, several hours of research, and other employees physically watching the activities of the rogue employee until they were able to visually see what was being run on the system and what appeared to be happening. We were able to obtain access to further evaluate the workstation when the representative was on a break and had not logged out or locked the workstation (a temporary moment of forgetfulness). It was almost as though the network manager was the “bad guy” for simply trying to find out what was causing the network performance problems and impacting the ability to supply customer service.

With a good monitoring solution, a great deal of time could have been saved in the above experience. There are monitoring tools, such as Sergeant Laboratories’ Aristotle or one of the more complex security information and event management (SIEM) systems, that would have been able to identify when the malicious application was downloaded, installed, and run, solving the confusion and the problem of who did what, when, and where. There would have been a quicker time to resolution, followed by nearly zero effort in investigation other than running a simple report. To top it off, the log data would already have been gathered, chain of custody maintained, and digital forensics requirements met to support the termination of the employee or maybe a criminal case. Some SIEM tools manage to mitigate at least some of the technical knowledge required to make sense of the events collected.

Having the ability to monitor and quickly report what is going on within a networking environment can save an organization from potential data breaches and identify risks before they escalate. The monitoring data can also save significant cost when gathering evidence to support a criminal case. Best, if possible, is having multiple layers of monitoring: something close to the user to identify unusual activity, backed up by detailed individual logs at the application, database, or operating system layers.

Is there a potential in organizations for proprietary information to be removed by an engineer who coincidentally just began a new job at the competition? Or maybe it is a simpler case, and bank examiners are just asking the Information Security Officer for the evidence to illustrate day-to-day activities being performed by authorized users of the system or application, or more specifically by the system, network, or database administrators. If an organization has outsourced even a portion of its information technology support, it is critical to monitor those highly trusted third-party vendor consultants. Often they have privileged access, and there is limited logging in place to allow tracking for purposes of change management.

Organizations scan the infrastructure, including the firewall, Web servers, databases, and applications, for vulnerabilities to verify secure coding or determine exploits, but how do we scan the human firewall? The human firewall (i.e., employees, contractors, and even our families) remains the weak link in any technical implementation of security controls. Organizations can address this by trusting that employees’ intent is pure, but verifying that their activities follow acceptable use and comply with policy by monitoring their computer use behavior.

Can we see when there is abnormal activity at odd hours?

Can we identify what even looks abnormal? Do we know what is normal?

Do we know when programs or browser add-ons are being installed?

How would we know which administrator made the user access changes?

Can we tell when a USB device or maybe the CD/DVD burner is used?
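Even a simple script over collected logs can begin to answer questions like these. The sketch below is purely illustrative: it assumes a hypothetical comma-separated activity log of timestamp, user, and action, and it flags events recorded outside an assumed window of business hours.

```python
# Minimal sketch of flagging "abnormal hours" activity in a collected log.
# Assumes a hypothetical CSV log of: ISO timestamp, user, action.
import csv
from datetime import datetime

BUSINESS_START, BUSINESS_END = 7, 19  # 7 a.m. to 7 p.m.; adjust per organization

def odd_hours_events(path):
    """Yield (timestamp, user, action) rows logged outside business hours."""
    with open(path, newline="") as handle:
        for timestamp, user, action in csv.reader(handle):
            hour = datetime.fromisoformat(timestamp).hour
            if not (BUSINESS_START <= hour < BUSINESS_END):
                yield timestamp, user, action

if __name__ == "__main__":
    for timestamp, user, action in odd_hours_events("activity_log.csv"):
        print(f"Review: {user} performed '{action}' at {timestamp}")
```

Flagging an event is only the start; someone still has to decide whether a 2 a.m. export is routine maintenance or a problem, which is where knowing what “normal” looks like matters.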

Historically, information technology has felt the need to protect users from themselves by implementing technology to stop or restrict an activity when it is attempted, often reducing functionality or productivity. Powerful technology tools are implemented with the intent to control the features and functionality that may be considered risky. In recent years, there has been more and more end-user education, producing better-trained, technology-savvy users. Not only can users be taught how best to use applications, systems, databases, etc., appropriately, but also how to use them in compliance with policy. It was not all that long ago we used to hear information security officers state, “It isn’t worth trying to train the users; they can’t even remember their passwords, so how are they going to learn how to use the technology?”

Monitoring the activities that users perform can be tough. Several applications do not support basic monitoring of changes by administrative-type users. Often, database performance is impacted when the auditing feature is turned on. Storage can be an issue on systems, the data collected will be very cryptic, and it can be even tougher when you do not know what to look for. Imagine trying to investigate an incident when you do not even have the data to analyze.

In September 2010, Forrester Research identified a so-called “zero trust” model for security. This model has revived debate about the way organizations secure their networks. The concept of “zero trust” means that end users are no more trusted than outsiders. As such, it is important for organizations to monitor user traffic from the outside as well as the inside. With this model, security becomes an even more integral part of the network.

There are usually two excuses why organizations do not monitor user activity in some way.

1. The first excuse is that organizations do not have the manpower to devote to log monitoring. I would not want to allocate manpower to watching logs as they scroll by on the screen either; it would be a boring, monotonous job, and it is not likely the person would even understand the logged data or recognize what was abnormal or what they should be looking for.

Have you ever had to deal with a security breach? Does your incident response plan actually provide for a way to identify that an incident is suspected or does it magically just decide that customer data has been breached and you need to start notifications? How do you investigate what happened and do you have forensic evidence of a possible crime? Investigations can be lengthy (and costly), and the more systems or users involved, the longer it takes.

Having the right user activity data to know what happens in your network, applications, databases, and systems can make it possible to not only detect data breaches, identify noncompliance with end-user policy, but also detect and provide evidence of criminal activity using the computer.

With the right tools, your organization could prevent a breach or at least prevent serious collateral damage.

2. The second excuse is that monitoring solutions are too expensive. How expensive is an incident if you do not catch it quickly? What happens if you do not have enough evidence of what happened or if you cannot meet examiner monitoring requirements?

At minimum, network server logging should be turned on to gather as much information as possible. Then at least you may have lots of information for when the stuff hits the fan, or for when you try to explain why you cannot figure out what happened or how long it has been going on.

There are solutions available to easily and reasonably monitor user activity. Sure there are also the very expensive vendor solutions to centralize logs and those often take a lot of effort to manage. Not many organizations can afford the price tag or the resources.

As new requirements, such as the looming PCI deadline and the tracking of users’ (especially administrators’) activity, become hot buttons for examiners, logging is something organizations will be scrambling to implement as they try to better protect customer information. Simply logging is not enough; showing evidence that logs are reviewed comes next, and having the technical knowledge to understand their content will become an issue. It is not enough just to generate the reports of activity and file them; someone has to look at them or monitor the data interactively.

Another aspect of “zero trust” is how we personally handle our private information. Each individual has a responsibility to protect themselves against the constant barrage of scams and phishing attacks through personal e-mail and links embedded within the Web sites we visit, which have the potential to download Trojans and other applications (e.g., keyloggers, botnets) that capture our information.

In spring 2011, Epsilon, the largest distributor of permission-based e-mail in the world, had a data breach revealing millions of e-mail addresses. These types of breaches open the door for a surge of targeted phishing attacks, often referred to as “spear phishing.” When these types of breaches occur, an individual should exercise a healthy dose of cautious skepticism toward any e-mails received directly in their home e-mail accounts; even when you are a customer of the company allegedly sending the e-mail, or when the e-mail looks convincingly legitimate, do not trust it.

Many people have had this happen to them more than once: a call comes from their wife stating, “I was browsing this website looking for Old English sheepdogs and this thing popped up, so I clicked on it and now the screen is blank. What should I do?” As one could guess, malicious code was installed and wiped out the system. Typically, the first response might be, “You shouldn’t have clicked on anything,” to which the response always received is, “Well, how am I to know that?” Almost everyone at some time has had the “false alert trojan” pop up; one time that comes to mind was while watching scores during NCAA basketball’s March Madness on a popular and legitimate sports Web site.

It is important to maintain up-to-date antivirus and even personal firewalls with some level of basic intrusion prevention. Be careful, just implementing a tool can lead to overconfidence and a sense that now the system cannot be attacked. These tools alone are not a silver bullet, they do not catch everything, and hackers continue to get cleverer about how to compromise systems. And there is still the Human Firewall to keep up to date.

In its simplest form, security can be defined as the state of being free from unacceptable risk. The journey to a secure computing environment remains exciting, with a never-ending goal of managing risk in mind. Implementing a proactive yet flexible security program, which includes proactive user and computer activity monitoring, will allow both organizations and individuals at home to foster a model of “zero trust.”

About the Authors

Ken Shaurette is an experienced security and audit professional with a strong understanding of complex computing environments, legislative and regulatory requirements, and security solutions. He is a founding member and past president of Western Wisconsin InfraGard Chapter, past president of ISSA–Milwaukee (International Systems Security Association), current president and founding member of ISSA–Madison, past chairman of MATC Milwaukee Security Specialist Curriculum Advisory Committee, member of Herzing University’s Department of Homeland Security Degree Program, and member of Western Wisconsin Association of Computer Crime Investigators (WWACCI). He has security information published in several books and trade magazines. In his spare time, he works as director of IT Services for Financial Institution Products Corporation (FIPCO®), a subsidiary of the Wisconsin Bankers Association. If you would like to contact Ken, he can be reached via e-mail at kshaurette@charter.net.

Thomas J. Schleppenbach, CISSP, CISM, is a senior information security advisor with over 20 total years of IT experience. He is a trained IT auditor and assessor, who focuses on helping organizations with secure infrastructure design. He provides strategic security advice that helps organizations plan and build information security programs for compliance with legal and regulatory requirements. He is a member of the Western Wisconsin Chapter of InfraGard Executive planning committee and a member of the Wisconsin Association of Computer Crime Investigators (WACCI). For questions or comments, contact Tom at tjschlepp@schleppenbach.org.

References

Kindervag, J. No more chewy centers: Introducing the zero trust model of information security. Forrester Research, September 2010. http://www.forrester.com/rb/Research/no_more_chewy_centers_introducing_zero_trust/q/id/56682/t/2.

Merriam-Webster Online. http://www.merriam-webster.com/dictionary.

Wagley, J. Zero trust model. Security Management. http://www.securitymanagement.com/article/zerotrust-model-007894.