Combined DNA Index System Operational and Laboratory Vulnerabilities
Audit Report 06-32
May 2006
Office of the Inspector General
The following guidance was provided to the survey respondents at the beginning of the survey: As a rule, please select only one answer to each of the survey questions. Guidance on how to interpret the question is presented in italics. Note that throughout the survey, “NDIS requirements” is used to refer to all of the requirements with which an NDIS-participating laboratory must comply to use and maintain its CODIS system, including the NDIS operating procedures and NDIS data acceptance standards. However, we distinguish NDIS requirements from the QAS, even though compliance with the QAS is required for NDIS participation. Survey respondents were also instructed to provide their responses directly to the OIG, with no copy to any other organization, such as the FBI, the National Institute of Justice, or accrediting organizations. Below, we describe our strategy for tallying the survey responses and then list the survey results by question.

Tallying of Survey Responses

We devised several systems for summarizing the survey results and calculating averages and percentages for the various questions. The system used for each question depended on the type of question and the calculation that would best portray its results. We tallied the number of “yes” and “no” responses for questions 5, 23, 27, 33, 36, 37, 38, 39, 40, and 41. In addition, we assigned a numeric value to questions 1, 3, 4, 11, 16, 32, 34, 42, 44b, and 46, but the numbers had no positive or negative significance. Questions 7, 8a, 9, 10, 13, 14, 17, 18, 21, 22, 26, 28, 29, 35, 43, and 44a were assigned numeric values with a positive or negative implication, moving from negative to positive as the numbers increased. For questions 2, 8b, 12, 19, 20, 30, and 45, we created our own alpha key (i.e., we assigned alphabetic designators, similar to acronyms, that allowed us to tally the responses in the limited space of our spreadsheet).
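The three tallying systems described above — yes/no counts, scaled numeric averages, and alpha-key tallies — can be sketched in a few lines. The question groupings, response values, and function names below are illustrative assumptions, not the auditors' actual spreadsheet:

```python
from collections import Counter

# Hypothetical survey responses keyed by question number (illustrative only).
responses = {
    5:  ["yes", "no", "yes", "yes"],     # yes/no tally questions
    13: [1, 2, 2, 3],                    # scaled questions (larger = more positive)
    19: ["QAS", "FBI", "QAS", "OTHER"],  # alpha-key (categorical) questions
}

def tally_yes_no(answers):
    """Count the 'yes' and 'no' responses for a yes/no question."""
    counts = Counter(answers)
    return counts["yes"], counts["no"]

def average_scaled(answers):
    """Average a question whose numeric values run from negative to positive."""
    return sum(answers) / len(answers)

def tally_alpha_key(answers):
    """Tally categorical responses coded with short alphabetic designators."""
    return dict(Counter(answers))

print(tally_yes_no(responses[5]))      # (3, 1)
print(average_scaled(responses[13]))   # 2.0
print(tally_alpha_key(responses[19]))  # {'QAS': 2, 'FBI': 1, 'OTHER': 1}
```

The alpha-key tally mirrors the spreadsheet approach the auditors describe: a short designator per answer option, counted rather than averaged.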
Tallying the results for question 15 was more complex than for the other questions because there was not typically one correct response to the scenarios given in the question. For each scenario, respondents could select "Yes," "Yes, under the following conditions," or "No, for the following reason(s)." As a result, we developed a grading matrix to help us evaluate the different factors that a respondent could cite to justify a response. We distinguished between primary, secondary, and peripheral factors that would need to be considered in evaluating each scenario; some scenarios were simpler and involved few factors, while the more complex scenarios involved many. From the matrix we developed a grading scale under which responses were graded based on the number of factors they cited. The scale was 1 to 5, with 1 = poor, 2 = marginal, 3 = adequate, 4 = good, and 5 = exceptional. We averaged the grades, but also tracked the number of "1," or "poor," responses received. For questions 24 and 25, we summarized the comments into phrases and then grouped similar phrases into categories. For questions 6 and 31, we used the actual percentages and dates given by the respondents. In addition, 26 of the questions allowed respondents to provide comments; auditors modified some comments slightly to correct grammar and sentence structure. The 26 questions were 8, 11, 13, 17 (in two separate places), 20 (in two separate places), 23, 27, 28, 29, 30, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 44.b., 45 (in two separate places), and 46. We analyzed the comments provided and identified trends for each question, as well as across all of the comments.

Survey Results

Described below are the results for each question of our survey, along with an explanation of the various options offered to the administrators with each question.
Throughout the survey questions, italicized text was used to give instructions to the administrators on how to interpret our questions and proceed through the survey.

Demographics
Ninety-five respondents were from LDIS laboratories and 49 were from SDIS laboratories. The average time the respondents had served as CODIS administrators was 3 to 5 years. The average size of respondents’ DNA laboratories was 6 to 10 positions, counting all staff specific to the DNA portion of the laboratory.

FBI CODIS Unit Responsiveness
_% confirmation on whether a profile is allowable for NDIS

Percentages were highest for the purposes listed below.
While 139 responses were received to this question, a significant percentage marked “N/A” if the reason we offered did not apply to their usual contact with the FBI. The higher the percentage of “N/A” responses, the fewer the respondents who contacted the FBI on that topic. The percentage of “N/A” responses is shown in the “N/A” column for each category. The chart also shows the average response for each topic; the rounded average responses fell in the latter two columns.
While 125 responses were received for this question, a significant percentage marked “N/A” if the purpose did not apply to their usual contact with the FBI. The higher the percentage of “N/A” responses, the fewer the respondents who contacted the FBI on that topic. The percentage of “N/A” responses is shown in the “N/A” column for each category, and the rounded average responses fell in the latter two columns.
Fifty responses were received for question 10 and the average response was between moderate and minimal potential. We received 39 comments. Trends in the comments were that the CODIS Unit needs more staff and that the CODIS Unit should disseminate more information to the CODIS community via the CODIS website or the Criminal Justice Information Services Wide Area Network (CJIS WAN). Disseminating more information to the CODIS community via the CJIS WAN was also a comment trend identified when all 636 comments were analyzed: our analysis showed 37 respondents made a total of 51 comments regarding posting information through the CJIS WAN. A related comment trend is that while the FBI’s accessibility and responsiveness have improved, more improvements are needed; our analysis showed 20 respondents made a total of 28 comments regarding the FBI’s inaccessibility and untimely responses. Respondents suggested that the FBI set standards on the timeliness of responses and establish a mechanism for ensuring all inquiries are addressed, similar to the standards and tracking applied to the CODIS contractor's help desk for information technology questions. For situations where a response cannot be formulated in a timely fashion, respondents suggested at least an interim reply along the lines of "X person will respond by Y time with the information you requested."

Allowability of DNA Profiles
If your answer to question 12 indicates you are partially or fully responsible for designating which profiles are uploaded to NDIS, please complete questions 13-15.
We received 130 responses to question 13 and the average response was “Routine.”
We received 130 responses to question 14 and the average response was closest to the “consistently confident” option. Each scenario offered the following options:
We used a grading scale that evaluated the quality of the responses received, with 1 = poor and 5 = exceptional. We received 131 responses and the average grade was 3.4. We received 132 responses and the average grade was 3.6. We received 132 responses and the average grade was 4.3. We received 132 responses and the average grade was 4.1. We received 133 responses and the average grade was 3.5. We received 133 responses and the average grade was 3.7.

For question 16, most respondents gave multiple answers, even though we asked for a single “final authority.” Consequently, the percentages overlap and do not total 100 percent; to preclude misinterpretation, we did not put percent labels on these charts. However, the dominant responses are clear. “N/A” responses are not reflected but were few. In the preceding chart it is clear that CODIS administrators see the LDIS administrator as the primary authority over what goes into LDIS. In the preceding chart it is clear that CODIS administrators see the SDIS administrator as the primary authority over what goes into SDIS, although roughly one-third of the responses also emphasized state law or policy. In the preceding chart we see that NDIS is the only level at which respondents weighed the national law or policy almost as heavily as the national representative.
The average response to question 17 was “No, not all laboratories have the same understanding, but community understanding is improving.” Administrators who said that the CODIS community does not have the same understanding had the option of providing additional comments, and 70 respondents did. Most of the respondents focused their answers on their participation in discussions at the National CODIS Conferences (NCC). Some took the perspective that the discussions further confused people, while others felt that the discussions helped by clarifying troublesome scenarios.
Twenty-three percent of respondents to question 18 said “Unsure or not applicable” and the majority of the remaining responses were “Yes, but they are the rare exception.”
Respondents gave multiple answers to question 19, and therefore we were not able to calculate true percentages. Instead, we focused on which sources of guidance were the top three named. We received 143 responses, with three options selected the most by respondents as all or part of their answer:
Since this question permitted multiple responses, the chart below is only intended to convey the magnitude of response. The three primary options we offered were selected fairly evenly as ways that would help community understanding.
Laboratory Quality
1 = Poor, since there are still fundamental quality controls we fail to consistently apply, OR we have one or more staff members who are not fully committed to QAS compliance.
2 = Fair, since we routinely apply most quality controls in our operations, but still need occasional improvement. All staff are committed to QAS compliance, but occasionally are not properly informed about the standards.
3 = Good, since we consistently apply all appropriate quality controls in our operations. All staff are fully committed to QAS compliance and are proficient in what those standards are.
4 = Excellent, since we apply all appropriate quality controls, and actively pursue enhancing those controls. All staff are committed to QAS compliance, are proficient in what those standards are, and are committed to surpassing those standards whenever warranted to ensure excellence.

We received 143 responses to question 21 and the average answer was 3.6.

1 = Below average (the majority of laboratories surpass our laboratory)
2 = Average (our laboratory is comparable to the majority of laboratories)
3 = Above average (our laboratory surpasses the majority of laboratories)
4 = Outstanding (our laboratory is a leader in quality in the DNA community)
5 = Unsure or not applicable based upon limited experience

We received 142 responses to question 22 and the average answer was 3.2. For question 23 we received 140 responses; the respondents who said “yes” had the opportunity to provide additional comments, and 10 did. Comments included statements regarding weaknesses related to the following areas:

General CODIS Operations
We received 126 responses to question 24 and the top challenges are listed below. We received 122 responses to question 25 and the most important successes are listed below.
The average answer was 4.5.
The average answer was 4 and, of the 143 responses we received to this question, 24 gave additional comments. The following numeric rating scale was used:
Of the 143 responses received, 21 provided additional comments and 27 percent said “N/A.” The average answer was 3.1. Trends in this section of the survey are listed below.

NDIS Audit Review Panel
Note that this chart does not include a small number of “other” designations that were received, accompanied by supplemental comments. The comments further emphasized that individual interpretations of standards still exist. We received 137 responses. When we asked about the total time it took to process their last completed external QAS audit, slightly less than one-third said it took longer than 6 months, roughly one-third said it took 4 to 6 months, and slightly more than one-third said it took 0 to 3 months. This does not include the less than 1 percent who said “N/A.” This information sheds some light on potential causes of delay in closing out audits, since nearly one-third of the respondents indicated that the Audit Review Panel had followed up to get more corrective action documentation after the original submission by their laboratories. This question was conditional upon the response to the preceding question; therefore, non-valid responses were disregarded. The numeric rating system used is below.

1 = Timeliness does not seem to be improving.
2 = Timeliness is improving slowly.
3 = Timeliness is actively improving.
4 = Necessary improvements have already been made.

We received 83 responses and the average was 2.7, closest to the “actively improving” designation. Respondents also had the option of selecting “other” and providing a comment. Eleven respondents provided comments, and the trend showed that respondents had no basis to form an opinion as to whether the NDIS Audit Review Panel had improved the timeliness of its reviews.

FBI Guidance to the CODIS Community
Note this does not reflect the less than 1 percent of respondents who said “N/A.” The respondents who said “no” had the option of providing additional comments, and 31 did. The trend in the comments was that the interpretation of standards varies between auditors and the CODIS community as a whole. Note this does not reflect the less than 1 percent of respondents who said “N/A.” The respondents who selected “no” had the option of providing comments, and 20 did. The trends identified in the comments were that the interpretation of standards varies between auditors and the CODIS community as a whole, and that the CODIS Unit does not respond to questions in a timely manner. We received a total of 140 responses to question 38. The respondents who selected “no” had the option of providing additional comments, and 33 did. The trends identified in the comments were that the interpretation of standards varies between auditors and the CODIS community as a whole, and that the standards and the audit document need to be updated. We received a total of 140 responses to question 39. The respondents who selected “no” had the option of providing additional comments, and 48 did. The trend identified in the comments was that the interpretation of standards varies between auditors and the CODIS community as a whole. We received 143 responses to question 40. The respondents who said “no” were given the opportunity to provide comments, and 29 did. We received 143 responses to question 41. The respondents who said “no” were given the opportunity to provide comments, and 26 did. The responses received about auditor qualifications to both questions 40 and 41 were put into context with the comments that were provided. A few examples are below: The trends from comments we identified in question 42 are below: The numeric rating scale used for question 43 is below.

1 = Inconsistent. The messages conveyed at meetings or conferences do not match what is contained in written guidance or what is conveyed in individual responses.
2 = Somewhat consistent. The messages conveyed at meetings or conferences periodically match what is contained in written guidance or what is conveyed in individual responses.
3 = Consistent. The messages conveyed at meetings or conferences match what is contained in written guidance or what is conveyed in individual responses, with rare exception.
4 = Very consistent. The messages conveyed at meetings or conferences always match what is contained in written guidance and what is conveyed in individual responses.
N/A = Unsure or not applicable based upon limited experience

The average rating for this question was 2.8, which is closest to the “consistent” designation. In addition, 12 percent of the respondents said “N/A."
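Throughout these results, an average rating is reported as "closest to" one of the scale's designations (here, 2.8 maps to "consistent"). That mapping amounts to rounding the average to the nearest scale point; a trivial sketch, using the question 43 labels above (the function name is ours, not the report's):

```python
# Labels from the question 43 consistency scale (1-4).
labels = {
    1: "Inconsistent",
    2: "Somewhat consistent",
    3: "Consistent",
    4: "Very consistent",
}

def nearest_designation(average):
    """Return the scale label whose numeric point is closest to the average."""
    point = min(labels, key=lambda k: abs(k - average))
    return labels[point]

print(nearest_designation(2.8))  # Consistent
```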
The categories in the preceding graphic were offered as responses and since multiple responses were permitted to this question we could not calculate true percentages. This graphic is intended only as a way to convey the magnitude of the responses given, by the 30 people who responded to this question.
Multiple responses were permitted for question 45, and the options displayed above were the ones we offered as responses. In addition, a total of 47 respondents provided additional comments. The trends identified in these comments are listed below: We received a total of 40 comments to question 46, and the trends we identified are listed below: Inconsistency in the interpretation of the standards throughout the CODIS community was also one of our comment trends; specifically, we received 161 comments from 83 respondents on the subject. The second comment trend in question 46 concerns resources, including personnel, better technology, and tools such as expert systems. In addition, comments indicated that the lack of resources forces laboratories to make difficult decisions regarding resource allocation, placing pressure on quantity versus quality, which may not be best for the CODIS community as a whole.