CSA Sentinel - The Institute of Internal Auditors


PUBLISHED BY THE INSTITUTE OF INTERNAL AUDITORS
VOLUME 5 · NUMBER 2 · JUNE 2001
 

CSA: A Journey Beyond Internal Control

By Jeff Haner, CPA
President
J. Haner Consulting

There are so many variations of the techniques collectively called control self-assessment (CSA) that it seems wise to begin any discussion about the topic with a definition of the term. In this instance, I speak of CSA as a collaborative process in which a facilitation team assists representatives of an organization in assessing how well they are achieving the objectives that are critical to their organization’s success. The primary assessment vehicle is a facilitated workshop in which participants discuss challenges and successes to identify opportunities for improvement.

When properly planned and facilitated, the workshop setting is unlike anything I’ve seen in terms of its ability to establish a safe environment in which participants are willing to raise and objectively assess sensitive issues. CSA workshops often sound advance warning of critical breakdowns in an organization, providing an opportunity to resolve problems before they reach a crisis point. When leadership listens and responds to concerns raised in these workshops, there is often a positive effect on both employee morale and the organizational culture as a whole. To better describe the nature of the CSA process and its benefits — especially its ability to unite people — I prefer to call the process “collaborative self-assessment.”

Given the success that many audit practitioners have realized with CSA, the question naturally arises: Could CSA be employed for other critical nonaudit activities in an organization? After having several opportunities to apply the CSA process to other challenging organizational issues, I’d answer this question with an emphatic “Yes!” Here’s one example of how I’ve seen CSA successfully used to address a critical but nonaudit need.

PLANNING THE ASSESSMENT
Earlier in my career, I was part of a team charged with assessing the suitability of a key software upgrade for certain users in the organization. This is a common activity that occurs every day in companies around the globe, but this particular case had a few unique wrinkles that complicated matters. 

The older software version was an aging product that was no longer meeting the needs of many of the users, so there was a clear need for action. However, there were two separate groups within the user base, each with its own set of priorities and needs. After a period of several years, the vendor released a new version of the software that contained significant — even radical — changes. We were very concerned that the upgrade would no longer meet the needs of our users.

Knowing that the software was used to perform a specific function, our project team members started the assessment process by identifying key characteristics that would be necessary for any software product to be used for this purpose. Fortunately, we had access to a list of several hundred product enhancement requests and complaints from the users that had been compiled since the last upgrade. 

By examining the list, consulting with key users, and using our own knowledge of the product, we identified more than 20 broad software requirement categories that needed to be evaluated. The categories were characteristics such as “user interface” and “reporting templates.” They were roughly equivalent to the key objectives that are assessed in a typical CSA audit project.

After our team determined what categories to evaluate, we began to plan how to assess them. To capture our discussions in writing, we chose the standard two-column, T-account worksheets commonly used in CSA workshops. Positive comments (pros) about the software are placed in the left column, and negative comments (cons) are recorded in the right. A small section at the bottom of the worksheet is reserved for recommendations.

T-account Worksheet

Software Requirement: _______________________________________

 Pros                               | Cons
 -----------------------------------|-----------------------------------
                                    |
                                    |
                                    |

 Recommendations:
 ____________________________________________________________________

We also developed the following three criteria for use with an OptionFinder® anonymous voting system to assess each requirement category (a brief sketch of how such votes might be recorded follows the list).

  • Importance      
  • Actual usability      
  • Desired usability
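For illustration only (the structure and names below are hypothetical, not taken from the original project), the three vote types for each requirement category could be recorded in a simple Python structure such as:

    # Hypothetical sketch of a per-category vote record (illustrative only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CategoryVotes:
        category: str                                               # e.g., "user interface"
        importance: List[int] = field(default_factory=list)         # 1-7 votes
        actual_usability: List[int] = field(default_factory=list)   # 1-7 votes
        desired_usability: List[int] = field(default_factory=list)  # 1-7 votes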

Because representatives from both user groups were scheduled to attend the same workshop, we drafted demographic questions to identify which group each participant represented and the level of his or her experience with the old version of the software.

(Note: OptionFinder is one of several wireless keypad systems that enable participants to respond anonymously to multiple-choice questions. The system can be run from a laptop computer, and its questions and response graphs are typically projected on a screen for all to see. Due to the anonymity it provides, it is particularly helpful for encouraging objective discussion of sensitive issues.)

GATHERING USER FEEDBACK
Twelve participants attended the actual assessment, which took place over the course of two days. The first day was devoted to training and familiarization with the new software. In the morning, representatives from the vendor conducted an initial demonstration followed by a more detailed training session.

In the afternoon, each participant received a copy of the new software, which they used individually for the remainder of the day to test how well they could perform their normal tasks. The vendor representatives remained throughout the afternoon to answer questions and provide assistance as needed.

On the second day, our project team led the participants through a facilitated workshop. We began the workshop by using OptionFinder to ask demographic questions. The participants identified which user group they represented and reported their level of experience with the older version of the software.

We knew that there would not be enough time to have a detailed discussion about each of the 20 identified categories. Therefore, to ensure that we covered the most critical areas, we used OptionFinder to rate the importance of each software requirement category on a scale of one to seven. The importance ratings helped us prioritize the categories and determine the order in which to present them to the group.
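As a rough sketch of that prioritization step (the votes and the third category name below are invented for illustration, not the actual workshop data), the discussion order could be computed like this:

    # Hypothetical example: order requirement categories by average importance vote.
    from statistics import mean

    # Illustrative 1-7 importance votes per category (not the actual workshop data).
    importance_votes = {
        "user interface": [6, 7, 5, 6, 7],
        "reporting templates": [4, 5, 6, 5, 4],
        "data import/export": [7, 6, 7, 7, 5],
    }

    # Discuss the highest-rated categories first.
    discussion_order = sorted(importance_votes,
                              key=lambda c: mean(importance_votes[c]),
                              reverse=True)
    print(discussion_order)  # ['data import/export', 'user interface', 'reporting templates']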

We then spent the bulk of the second day assessing the usability of the software. Starting with the most important category, we held a general discussion about the key pros and cons that each participant had observed while using the software during the previous day. One of the project team members used a laptop computer to record comments on a T-account worksheet. The worksheet was projected onto a screen so that all participants could view the information that was being recorded. 

When the participants finished discussing the key issues for a particular category, they used OptionFinder to respond to the following two questions.

  • What is the actual usability?        
  • What is the desired usability?

Responses were recorded on a scale of one to seven. By subtracting the actual rating from the desired rating, our project team could quantify the opportunity gap that existed between the actual usability of the software and the participants’ expectations. For example, if the desired usability rating was 6.5 and the actual usability rating was 5.2, the opportunity gap would be 1.3 (6.5 – 5.2 = 1.3). 

The actual usability and gap ratings can also be expressed as a percentage. For example, the actual rating (5.2) divided by the desired rating (6.5) would yield an 80 percent actual usability score (5.2/6.5 = 0.8). Additionally, the opportunity gap (1.3) divided by the desired rating (6.5) would yield a 20 percent gap (1.3/6.5 = 0.2).
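In code, the arithmetic is trivial; the sketch below simply reproduces the example figures from the text:

    # Opportunity gap arithmetic, using the example ratings from the text.
    desired = 6.5   # desired usability rating
    actual = 5.2    # actual usability rating

    gap = desired - actual               # 1.3
    actual_pct = actual / desired * 100  # 80 percent actual usability
    gap_pct = gap / desired * 100        # 20 percent opportunity gap

    print(f"gap = {gap:.1f}, actual = {actual_pct:.0f}%, gap = {gap_pct:.0f}%")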

Following a vote, the participants moved to the next category and repeated the process until they had assessed all of the categories. The amount of discussion time for each category ranged from approximately five to 40 minutes and seemed to correlate with the importance rating. Those categories with higher importance ratings that were presented earlier in the day prompted lively discussions, and those with lower ratings drew fewer comments.

At the end of the day, after the participants had developed a level of trust with the process and with one another, our project team closed the meeting by using OptionFinder to ask the following multiple-choice questions.

1.   Please rate your level of agreement with the following statement.

This software will put me in a better position to do my job.

    • Strongly Disagree     
    • Disagree     
    • Not Sure     
    • Agree     
    • Strongly Agree

2.   How many current users should adopt this upgrade?

    • None     
    • Few     
    • Some     
    • All 

3.   Should we pursue another product?

    • No     
    • Possibly     
    • Definitely

 A short discussion followed these votes. We then thanked the participants for their time and adjourned the workshop.

DEVELOPING THE INITIAL PICTURE
Our project team began to analyze the results by reviewing the usability ratings. First, we averaged the usability ratings for all categories to determine the overall usability rating for the software. Then, we used our demographic information to slice the data and determine the overall usability rating for each user group. We found some intriguing differences.
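A sketch of that slicing step, using made-up ratings rather than the actual workshop data, might look like the following (it flags groups against the 30 percent rule of thumb discussed below):

    # Hypothetical sketch: average the actual and desired ratings per user group
    # and flag any group whose opportunity gap reaches 30 percent.
    from statistics import mean

    # (category, group, actual, desired) tuples; illustrative values only.
    ratings = [
        ("user interface", "A", 6.0, 6.5), ("user interface", "B", 4.0, 6.5),
        ("reporting templates", "A", 5.5, 6.0), ("reporting templates", "B", 3.5, 6.0),
    ]

    for group in ("A", "B"):
        actual = mean(r[2] for r in ratings if r[1] == group)
        desired = mean(r[3] for r in ratings if r[1] == group)
        gap_pct = (desired - actual) / desired * 100
        flag = "significant" if gap_pct >= 30 else "within threshold"
        print(f"Group {group}: usability {actual / desired:.0%}, gap {gap_pct:.0f}% ({flag})")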

Group A rated the software’s actual usability at more than 80 percent of its desired usability — a gap of less than 20 percent. Prior CSA experience has shown that a gap of 30 percent or greater usually indicates significant challenges. Group A’s gap was well below that threshold and seemed to indicate that the group had found the software to be appropriate for their purposes.

[Figure: Pie chart of Group A's actual usability rating and opportunity gap]

On the other hand, Group B had very different results. Participants rated usability lower, which resulted in a much larger gap that approached 40 percent. (A closer look at the data revealed that the more experienced Group B members had rated usability even lower.)

[Figure: Pie chart of Group B's actual usability rating and opportunity gap]

This large gap indicated that participants from Group B had serious concerns about the new software. 

BRINGING THE DETAIL INTO FOCUS
After looking at the snapshot that the overall usability rating provided, our team examined the individual ratings for each category. We found that by plotting the results on an XY graph — with the actual rating on the vertical axis (Y) and the desired rating on the horizontal axis (X) — we could show how the software fared against participant expectations, both overall and on a category-by-category basis. 

The graph that we used had a diagonal line that ran from the lower left corner to the upper right corner. This line, which started at the lowest rating pair possible (1-1) and ended at the highest possible (7-7), represented the theoretical perfect score at any point along the line, where the actual usability would precisely match the desired usability. Ratings plotted below the line indicated opportunity gaps, and those above the line indicated actual scores that were greater than desired scores — negative gaps.
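For readers who want to reproduce this type of chart, here is a minimal sketch (assuming the matplotlib library is available; the plotted points are invented, not the workshop's ratings):

    # Hypothetical sketch of the actual-vs-desired graph with a perfect-score diagonal.
    import matplotlib.pyplot as plt

    # Illustrative (desired, actual) rating pairs on the 1-7 scale; not the real data.
    desired = [6.5, 6.0, 5.5, 6.8, 5.0]
    actual = [6.0, 5.8, 5.6, 4.0, 5.2]

    fig, ax = plt.subplots()
    ax.plot([1, 7], [1, 7], linestyle="--", color="gray")  # perfect score: actual equals desired
    ax.scatter(desired, actual)                            # points below the line show opportunity gaps
    ax.set_xlabel("Desired usability (1-7)")
    ax.set_ylabel("Actual usability (1-7)")
    ax.set_xlim(1, 7)
    ax.set_ylim(1, 7)
    plt.show()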

When Group A’s ratings were plotted on the graph, they formed a fairly well-contained cluster in the upper-right corner. Group A’s desired ratings were high, and their opportunity gaps were relatively small. A closer examination revealed that nearly one-third of the categories had an opportunity gap of less than 10 percent. Two categories actually had negative gaps. 

[Figure: X-Y graph of Group A's actual versus desired ratings by category]

Group B’s ratings were much different. When plotted on the graph, the ratings formed a more dispersed pattern that ranged from the upper right to lower right corners. This showed that Group B’s desired usability ratings were quite high, but their actual usability ratings were low. Half of the requirement categories had gaps of more than 30 percent, including seven of the 10 most important categories.

[Figure: X-Y graph of Group B's actual versus desired ratings by category]

The contrast between the two charts was immediately apparent. We concluded that both groups had fairly high expectations. The software had done a reasonable job of meeting them for Group A; however, it met few of Group B’s expectations.

EXAMINING THE DIFFERENCES
Although the OptionFinder ratings were useful for highlighting areas of concern, our team could not make decisions based on numbers alone. The scores didn’t reveal the specific concerns that had prompted the ratings and they didn’t tell us whether these concerns were valid. Therefore, to collect more information, we began to analyze the comments that were captured during the workshop.

On the pro side, we found that participants were happy with changes that had been made to the user interface, which made the software easier to use. Several new features were well received, and some existing capabilities were greatly improved. The participants felt that, in many respects, the new system had greater flexibility. They also indicated that the underlying technology was a great improvement over the previous version, in terms of system stability (the system did not crash as often) and its use of up-to-date technology. Both groups generally agreed on these pros.

Group B’s concerns surfaced when we examined the cons. Although participants recognized that the software had many improvements, several members of Group B questioned whether the increased flexibility had made the software unnecessarily complex. They expressed concern about features in the older version that were eliminated with the upgrade. In fact, several participants from Group B did not believe that the new system would support key activities that were being performed by the current system. They also expressed disappointment that the upgrade did not contain certain features that they believed should have been standard in any new software product of this type.

In our initial review of the OptionFinder results, we were concerned that Group B was simply exhibiting resistance to change. However, the participant comments revealed a key difference between the two user groups. Members of Group A used the older software version in a fairly uniform manner, consistent with its original design. Group B participants had begun using the old software to support new processes and perform new tasks for which it had not been intended. In some cases, Group B users had even linked other external programs to the old system.

The new system simply had not been designed to accommodate all of Group B’s specialized tasks and processes. Many Group B participants thought that we would be trying to “fit a square peg into a round hole” by adopting the upgrade. Although they recognized the need for a replacement, they believed that it would be more cost-effective to purchase another product or to develop a replacement in-house.

The responses to the OptionFinder questions we had asked at the end of the workshop confirmed the challenges cited by Group B. Fewer than half of the Group B participants believed that the new software would put them in a better position to do their jobs. More than one-third indicated that none of their users should receive the upgrade. Another third believed that only a few of their users should receive the upgrade. When asked, “Should we pursue another product?” the majority of Group B users voted “definitely.”

MAKING THE DECISION
Ultimately, after our project team reported these findings, the two groups made different decisions about how to proceed. Group A decided to accept the upgrade, and Group B chose to reject it. Decision-makers in Group B carefully reviewed the concerns of their users relative to their job requirements and decided that the potential problems reflected in the large usability gaps were too severe to warrant purchasing the upgrade. Instead, they began developing their own software tools to better meet their users’ needs.

As for me, the CSA practitioner, I was pleasantly surprised by how well the process worked in this application. It greatly reduced the impact of organizational politics and brought more objectivity into the software purchasing decision. Since then, other members of the organization have repeated this approach for subsequent software development and purchasing decisions with similar, positive results.
