
hearing days divided by the number of examiners available. For example, if there is a backlog of 90 cases awaiting a hearing, hearings take an average of 2 days, and an average of 3 hearing examiners are available to conduct hearings, then after the pleading stage an incoming case will have to wait approximately 60 working days before receiving a hearing (i.e., 90 cases times 2 days divided by 3 examiners). Reducing the hearing days by 1 day will, therefore, reduce the waiting time by about 30 days (i.e., 90 times 1 divided by 3). For this reason it would be helpful if future data gathering indicated fractions of days for the hearing days stage for each case instead of rounding off to whole days. It would also be helpful if data were available on the portion of the awaiting prelim stage that is actually spent in writing the preliminary decision and the portion of the awaiting final stage that is actually spent in writing the final decision.
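
As a rough numeric sketch of this waiting-time arithmetic (a sketch only, using the hypothetical figures from the text; the function name is illustrative):

    # A minimal sketch of the waiting-time arithmetic described above, using the
    # hypothetical figures from the text (90-case backlog, 2-day hearings, 3 examiners).
    def expected_wait_days(backlog_cases, avg_hearing_days, examiners):
        """Approximate working days an incoming case waits before its hearing."""
        return backlog_cases * avg_hearing_days / examiners

    baseline = expected_wait_days(90, 2, 3)      # about 60 working days
    shortened = expected_wait_days(90, 1, 3)     # hearings cut by 1 day
    print(baseline, baseline - shortened)        # 60.0, and a 30-day reduction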

With regard to the chain reaction of one stage on other stages, it should be mentioned that reducing the pleading time will merely tend to lengthen the awaiting hearing time if there is no shortening of hearing days or no decrease in the backlog or no increase in hearing examiners. Likewise reducing the proposed findings stage or the briefs stage will increase the awaiting preliminary or awaiting final decision stages unless the time required to write these decisions or the backlog awaiting such decision is reduced or unless there is an increase in decision writers. Nevertheless a reduction in the pleading, proposed findings, and briefs stages will be useful in putting pressure on the agencies to shorten the hearing and decision writing processes so that the time spent waiting for a hearing, preliminary decision, or final decision will at least be held constant.

B. WHAT COULD BE SHOWN

Although the hypothetical proceeding mentioned above was 67 percent improvable, such a simplified trend analysis does not take into consideration the possibility that the 1963 cases may have been at least 67 percent more complex than the 1962 cases. The complexity indexes for a given type of proceeding, however, do not tend to change substantially from one year to the next. Nevertheless a more realistic expected average would add or subtract some days from the 50 days of the expected average (which was based on the 1962 average) depending on the degree of positive or negative change in complexity between the 2 years. Suppose in our hypothetical illustration we use average transcript length as an index of complexity, and transcript length increased from an average of 200 pages in 1962 to 230 pages in 1963, thereby providing a 15-percent increase in our index of complexity (i.e., 30 extra pages divided by the original 200). If so, one might expect about 58 days to be consumed in 1963 (i.e., 50 days plus 15 percent of 50 days).
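
A small sketch of this adjustment, assuming (as in the hypothetical) that transcript length is the only complexity index used:

    # A minimal sketch of the complexity-adjusted expected average, using the
    # hypothetical transcript figures from the text.
    pages_1962, pages_1963 = 200, 230
    days_1962 = 50

    complexity_change = (pages_1963 - pages_1962) / pages_1962   # 0.15, i.e., a 15-percent increase
    expected_days_1963 = days_1962 * (1 + complexity_change)     # about 58 days
    print(f"{complexity_change:.0%} more complex; expect about {expected_days_1963:.0f} days")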

Even this approach has some important defects that might be remedied in future compilations of these evaluation charts. One defect of the above approach is that it only uses one of the three indexes of complexity. A second defect is that it assumes there is a perfect relation between a change in complexity and a change in time consumed. In reality a 15-percent increase in complexity may tend to produce a less than or greater than 15-percent increase in time consumed. The most meaningful way to predict how much time should be consumed by a type of proceeding in 1963 if one knows (1) how much time was consumed in 1962 and (2) how much each of the three indexes of complexity changed from 1962 to 1963 is to proceed as follows:

Step 1. For each of the 61 types of proceedings, record on an IBM card four things, namely: the percentage change in time consumption (symbolized ΔT, which is pronounced "delta T"), the percentage change in transcript length (symbolized ΔL), the percentage change in the average number of exhibits (symbolized ΔE), and the percentage change in the average number of private parties (symbolized ΔP).

Step 2. These IBM cards can then be processed by what is known as a regression program available at nearly any computer installation where statistical calculations are made.

Step 3. The output of the computer run will mainly consist of four numbers which represent parts of an equation of the form ΔT = 2 + 0.8(ΔL) + 0.2(ΔE) + 0.3(ΔP). These numerical parts are called regression coefficients.

Step 4. With a regression equation like this, we can make about as accurate a prediction of ΔT from ΔL, ΔE, and ΔP as is statistically possible. For example, if a type of proceeding (1) increased its average transcript length by 15 percent, (2) decreased its average number of exhibits by 5 percent, and (3) increased its average number of parties by 20 percent, then one would predict from the above equation that the type of proceeding would probably increase its time consumed by approximately 19 percent since 19 = 2 + 0.8(15) + 0.2(−5) + 0.3(20).

Step 5. To calculate the expected time for 1963, we then determine 19 percent of the 1962 figure and add this number to the 1962 figure. If 1962 consumed 50 days, we would thus expect 1963 to consume 60 days since 19 percent of 50 is about 10.

Step 6. The process of creating an equation like that produced in steps 1 through 3 should be repeated for each of the nine procedural stages. The 9 equations should then be applied to each of the 61 types of proceedings as was done in steps 4 and 5. All six steps can be quickly handled by the standard regression programs mentioned in step 2.
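
The following is a minimal computational sketch of steps 1 through 5. The percentage changes below are invented stand-ins for the 61 types of proceedings (the arrays take the place of the IBM cards), numpy's least-squares routine stands in for the era's standard regression programs, and the fitted coefficients will therefore differ from the hypothetical 2, 0.8, 0.2, and 0.3 used in the text; only the procedure is meant to be illustrative.

    # A sketch of the regression procedure: fit delta-T against delta-L, delta-E,
    # and delta-P by ordinary least squares, then predict a new delta-T.
    import numpy as np

    # Step 1: one row per type of proceeding (invented data): [dL, dE, dP] and observed dT.
    d_lep = np.array([
        [15.0, -5.0, 20.0],
        [10.0,  8.0, -2.0],
        [-4.0,  3.0,  6.0],
        [25.0, 12.0,  9.0],
        [ 2.0, -7.0,  1.0],
    ])
    d_t = np.array([19.0, 11.0, 1.0, 27.0, 1.0])

    # Steps 2-3: the "regression program" -- least squares yields the constant and coefficients.
    design = np.column_stack([np.ones(len(d_t)), d_lep])
    const, b_l, b_e, b_p = np.linalg.lstsq(design, d_t, rcond=None)[0]

    # Step 4: predict the percentage change in time for a proceeding whose transcript
    # length rose 15 percent, exhibits fell 5 percent, and parties rose 20 percent.
    predicted_change = const + b_l * 15 + b_e * (-5) + b_p * 20

    # Step 5: convert that percentage change into an expected 1963 average.
    days_1962 = 50
    print(round(days_1962 * (1 + predicted_change / 100)))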

In future compilations of evaluation charts, the year 1962 can continue to be used as a base year against which comparisons can be made. As an alternative one could use a different base year for each type of proceeding. A good base year would be the recent year in which the type of proceeding consumed the least amount of total time. This trend analysis approach might be referred to as the golden base-year approach. A more sophisticated approach (but one that is less of a stimulant to efficiency) might use all the recent good and bad years for which data was available. Standard computer programs exist that can fit a straight or a slightly curved line to the data points for multiple years so as to minimize the deviations from the points to the line, and they can also project the location of the next point. Thus if a type of proceeding consumed an average of 50 days in 1962, 66 days in 1963, and 74 days in 1964, the computer output would logically indicate that the expected average for 1965 is about 78, or else it would provide an equation from which one could derive the 78. The 78 days could then be adjusted upward or downward to take into consideration the percentage change from the previous year with regard to average transcript length, number of exhibits, and number of parties, as was previously described.
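
The report does not say which "slightly curved line" the computer programs would fit; one simple choice that reproduces the 78-day projection in the example is to extrapolate the year-to-year increments geometrically (16 days, then 8, so roughly 4 next). The sketch below uses that assumption and the figures from the text.

    # A minimal sketch of projecting the 1965 average from the 1962-1964 figures,
    # assuming the year-to-year increments shrink geometrically (an assumption,
    # not necessarily the curve the report's programs would fit).
    yearly_averages = [50.0, 66.0, 74.0]                     # 1962, 1963, 1964

    increments = [b - a for a, b in zip(yearly_averages, yearly_averages[1:])]
    ratio = increments[-1] / increments[-2]                  # 8 / 16 = 0.5
    projected_1965 = yearly_averages[-1] + increments[-1] * ratio
    print(round(projected_1965))                             # about 78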

Table 1 summarizes the basic trend analysis approaches that have been described on the previous pages. Examining the table and the formulas beneath it should help to clarify the calculations involved.

TABLE 1.-Calculating an expected average via trend analysis


1. Simple average = (days in base year) = 50.
2. More sophisticated average = (days in base year) + [(percentage change in transcript length) × (days in base year)] = 50 + (15 percent of 50) = 58.
3. Most sophisticated average = (days in base year) + [(percentage change in predicted days) × (days in base year)] = 50 + (19 percent of 50) = 60, where percentage change in predicted days = sum of [(computer-generated regression coefficients) × (percentage changes in complexity)] = 2 + 0.8(15) + 0.2(−5) + 0.3(20) = 19.

III. DISPERSION ANALYSIS

A. WHAT IS SHOWN

A second method of determining an average expected time to compare with an average actual time is to use the average actual time as a target time or a norm. Line IVB1 shows the average expected time if the cases that are dispersed or spread upward from the average actual time were reduced down to the average. A deviant case is thus defined as one that is above average on time consumption. For example if a type of proceeding had three cases consuming 21, 9, and 6 days respectively, then the actual average would be 12 days (i.e., 36 days divided by 3). If the cases that were above this average were reduced to the average, one would expect the average to be 9 days (i.e., 12+9+6 days divided by 3). This represents a saving of 3 days or 25 percent (i.e., 3 days divided by 12). The more deviant the above-average cases are, the greater the improvability is. The sum of the days savable from each of the stages may be greater than the days savable in the total time because the cases tend to be more deviant at the individual stages than they are on the total time.
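
A minimal sketch of this calculation, using the three hypothetical cases from the text:

    # Dispersion analysis: reduce above-average cases to the average and see how
    # much the average falls. Figures are the 21-, 9-, and 6-day hypothetical cases.
    days = [21, 9, 6]

    actual_average = sum(days) / len(days)                   # 12 days
    capped = [min(d, actual_average) for d in days]          # above-average cases reduced to 12
    expected_average = sum(capped) / len(days)               # 9 days
    savable = actual_average - expected_average              # 3 days, or 25 percent
    print(actual_average, expected_average, savable)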

In the Interstate Commerce Commission, a target time system related to the above analysis is used to flag cases that may be consuming an excessive amount of time at a given stage. Initiating the system involved establishing how long a given type of proceeding (e.g., applications for transportation operating rights) usually takes to complete a given stage (e.g., 60 days for completion of the pleading stage). In other words the average time consumed was used as a norm on the theory that this was a reasonable expectation for the cases to meet. The ICC has a computer which every month prints out the docket numbers of the cases in each type of proceeding that have exceeded the target time of the stage at which they are located. These docket numbers are called to the attention of those who are responsible for the types and stages involved. The target times are periodically adjusted to take policy changes and changes in the averages into consideration.
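
A rough sketch of such a monthly flagging run is shown below. The proceeding types, stages, target times, docket numbers, and dates are all invented for illustration and are not taken from ICC records.

    # A sketch of a target-time flagging pass: print docket numbers of cases that
    # have exceeded the target time for the stage at which they are located.
    from datetime import date

    targets = {  # (type of proceeding, stage) -> target time in days (invented)
        ("operating-rights application", "pleading"): 60,
        ("operating-rights application", "hearing"): 30,
    }

    cases = [  # (docket number, type of proceeding, current stage, date stage began) -- invented
        ("MC-12345", "operating-rights application", "pleading", date(2024, 1, 5)),
        ("MC-12399", "operating-rights application", "hearing", date(2024, 3, 20)),
    ]

    today = date(2024, 4, 1)
    for docket, kind, stage, began in cases:
        elapsed = (today - began).days
        limit = targets[(kind, stage)]
        if elapsed > limit:
            # these docket numbers would be called to the attention of those responsible
            print(f"{docket}: {elapsed} days at the {stage} stage exceeds the {limit}-day target")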

Trying to hold unusually long cases down to the average of their type of proceeding not only tends to lower the average, but it also results in more uniformity of treatment. Even where the average time consumed is low, disuniformity is undesirable if it reflects the fact that some cases are receiving more favorable treatment than others. For a simple way to measure the degree of disuniformity in a set of cases, one can determine the deviation of each case from the average, and then average these deviations ignoring the plus-or-minus sign. Thus in the hypothetical type of proceeding mentioned above the deviations are 9, 3, and 6 respectively (i.e., 21−12, 9−12, and 6−12), giving an average deviation of 6 days. The average deviation can also be expressed as a percent of the average (i.e., 6 days divided by 12 equals 50 percent) for comparison with other average deviations. From the data given in the evaluation charts, one can easily calculate the average deviation at a given stage by simply doubling the days savable on line IVB2. Thus in the above hypothetical the average deviation of 6 equals twice the 3 days savable. In other words, where above-average cases are considered deviant, days savable equals exactly half the average time deviation.
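
A short sketch of the disuniformity measure and its relation to days savable, again with the hypothetical 21-, 9-, and 6-day cases:

    # Average deviation (ignoring sign) as a measure of disuniformity, and the fact
    # that days savable is exactly half of it when above-average cases count as deviant.
    days = [21, 9, 6]
    average = sum(days) / len(days)                                       # 12 days

    average_deviation = sum(abs(d - average) for d in days) / len(days)   # (9 + 3 + 6) / 3 = 6 days
    relative_deviation = average_deviation / average                      # 50 percent
    days_savable = average_deviation / 2                                  # 3 days
    print(average_deviation, relative_deviation, days_savable)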

B. WHAT COULD BE SHOWN

What is lacking in the calculation of the expected averages on line IVB1 is any consideration of the fact that the cases which are above average on time consumed may be equally or even more above average on complexity and thus not so readily reducible. In other words if the spread on complexity were considered, then the actual average of 12 days in the hypothetical example mentioned above might be more realistically reducible to an expected average of about 10 or 11 days rather than 9 days.

One meaningful way to consider complexity in determining the expected average for line IVB1 is to feed into a computer the time consumed and the transcript length for each case in each type of proceeding. The computer can then be given instructions to make the simple calculations shown in the hypothetical data given in table 2 below. For each case the computer should calculate (1) the percent by which the case deviates from the time average of its type of proceeding, as is done in column 4 of the table, (2) the percent by which the case deviates from the complexity average of its type of proceeding, as in column 7, (3) the difference between these two percents, as in column 8, and (4) where this difference is positive, the product of the difference and the time consumed. The computer should then average these products in order to determine the average days savable, which in the hypothetical data is about 2 days. By subtracting this figure from the average actual time, one arrives at the expected average time. In the hypothetical data, this would yield an expected average of about 10 days (i.e., 12 days minus 2) rather than the 9 days arrived at when the complexity spread was ignored.

This method not only attempts to indicate the extent to which above-average cases can be reduced (as with case no. 1 of the hypothetical data), but also the extent to which below-average cases can be even further reduced. Thus if a below-average case is only 25 percent below the time average but is 45 percent below the complexity average, then it is still 20 percent improvable (as with case 2 of the hypothetical data). On the other hand if a case is above (or below) the time average but is even more above (or less below) the complexity average, then column 8 of the table will show a negative difference indicating the time consumed by that case is not readily improvable (as with case 3). In such a broadened method, deviant cases are in effect defined not as those that are above the average in time consumption, but as those that are more upward deviant on time consumption than they are on complexity.
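
A sketch of the column-by-column arithmetic follows. The transcript lengths are invented stand-ins (not the figures of table 2, which the scan does not reproduce), chosen so that the three cases behave roughly as cases 1 through 3 are described above and so that the result lands close to the 1.7 days savable and roughly 10-day expected average given beneath the table.

    # Complexity-adjusted dispersion analysis: a case is savable only to the extent
    # that it is more deviant upward on time than it is on complexity.
    times = [21.0, 9.0, 6.0]          # days consumed per case
    lengths = [320.0, 110.0, 170.0]   # transcript pages per case (invented stand-ins)

    time_avg = sum(times) / len(times)          # 12 days
    length_avg = sum(lengths) / len(lengths)    # 200 pages

    products = []
    for t, pages in zip(times, lengths):
        time_dev = (t - time_avg) / time_avg                # column 4: percent deviation on time
        complexity_dev = (pages - length_avg) / length_avg  # column 7: percent deviation on complexity
        difference = time_dev - complexity_dev              # column 8
        products.append(max(difference, 0.0) * t)           # only positive differences count

    days_savable = sum(products) / len(products)            # 1.65 days with these stand-in figures
    expected_average = time_avg - days_savable              # about 10.3 days
    print(round(days_savable, 2), round(expected_average, 2))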

For each case in all 61 types of proceedings, one could conceivably calculate 3 improvability percentages. One percentage could be based on transcript length as in column 8 of table 2, one based on the number of exhibits in a manner analogous to table 2, and one based on the number of parties. These three scores, however, could not be used as input into a regression program like that described in the trend analysis above or the comparative analysis below, because there is no measure of time improvability against which they can be meaningfully correlated, analogous to ΔT in the regression formula relating ΔT to ΔL, ΔE, and ΔP. Nevertheless the dispersion analysis approach is a useful method for evaluating the level and the uniformity of the time consumption data for each of the types of proceedings.

TABLE 2.-Calculating an expected average via dispersion analysis


Simple expected average = (time average) − ½ × (average absolute deviation) = 12 − ½ × [(9+3+6) ÷ 3] = 12 − 3 = 9.

Sophisticated expected average = (time average) − (days savable considering complexity deviation) = 12 − 1.7 ≈ 10.

IV. COMPARATIVE ANALYSIS

A. WHAT COULD BE SHOWN

The third set of expected averages is based on how well other types of proceedings do. The simplest form of this method would merely involve summing the average time consumed for each of the 61 types of proceedings and then dividing by 61. If this calculation produces an average total time of 290 days, then on line IVC1 under "total,"
