The External Effects of the Federal Bureau of Investigation’s Reprioritization Efforts
(Redacted for Public Release)
Audit Report 05-37
Office of the Inspector General
The objective of this audit was to assess the change in FBI investigative resources devoted to criminal areas and assess the impact of these changes on other federal, state, and local law enforcement entities.
Scope and Methodology
We performed our audit in accordance with the Government Auditing Standards issued by the Comptroller General of the United States, and our work included such tests of the records and procedures as we considered necessary to accomplish the audit objective. The informational nature of our audit objective did not require that we test the FBI’s compliance with laws and regulations or its overall internal control structure. To accomplish our objective, we interviewed officials at various law enforcement agencies, conducted a web-based survey of state and local law enforcement agencies, and analyzed computer-processed data from the FBI and the Executive Office for United States Attorneys (EOUSA).
Much of our work centered on interviews with officials at various federal, state, and local law enforcement agencies, which were conducted at the headquarters and field office levels. These interviews, as well as documentation obtained during interviews, provided perspective on the effects that the FBI’s shifting priorities and resources had on the FBI itself and on the law enforcement community as a whole. In total, we interviewed 328 law enforcement representatives.
Of these interviews, 65 were conducted with executive personnel at federal agencies and programs in the Washington, D.C., area. Specifically, we spoke with 23 officials at FBI Headquarters, including the Executive Assistant Director for Law Enforcement Services and the Assistant Directors for the Criminal Investigative Division and the Office of Law Enforcement Coordination. We also spoke with FBI officials at the FBI Academy in Quantico, Virginia, and the National Joint Terrorism Task Force in McLean, Virginia. Additionally, we interviewed 42 headquarters representatives at the following federal law enforcement agencies and programs:
Further, we spoke with four officials at the following international and national law enforcement agency associations: International Association of Chiefs of Police (IACP), Major Cities Chiefs Association, and Major County Sheriff’s Association. We also spoke with 259 representatives of federal, state, and local law enforcement agencies and departments during our visits to seven FBI field office jurisdictional areas. At each site, we interviewed officials at the FBI, ATF, DEA, ICE, U.S. Attorney’s Office (USAO), and USMS. Further, we interviewed state and local law enforcement representatives at a minimum of five departments per site. For these state and local interviews, we judgmentally selected police departments based upon responses to our web-based survey, choosing agencies that indicated they had been either negatively or positively affected by the FBI’s reprioritization. Additionally, we spoke with the primary police department located in each city visited. For example, while in Chicago, we met with officials from the Chicago Police Department. The table in Appendix VII lists the agencies contacted at each location.
Survey and Computer-Processed Data
To obtain more thorough insight into the effects the FBI’s reprioritization had on local law enforcement agencies, we developed and deployed a web-based survey to 3,514 state and local law enforcement agencies located in 12 FBI field office jurisdictions. Details regarding the survey are discussed later in this appendix.
To further understand the results of the FBI’s reprioritization, we analyzed data provided by the FBI and the EOUSA. Specifically, we conducted analyses of FBI statistical data on its resource allocation, resource utilization, and casework. Additionally, we requested and analyzed USAO data on the number of criminal matters the USAOs received from federal law enforcement agencies, particularly the FBI.
To examine the FBI’s human resource utilization, we analyzed data from the FBI’s Time Utilization Recordkeeping (TURK) system – a module of the FBI’s payroll system – for the period of September 26, 1999, through September 18, 2004. The TURK system contains work-hour and Average On-Board (AOB) data for most FBI agents and support personnel involved with investigative matters. To examine the types and quantity of cases the FBI investigated during the same period, we analyzed data from the Automated Case Support (ACS) system.
In September 2003, we issued an audit report on FBI Casework and Human Resource Utilization. During that audit, we performed tests to establish the reliability of the computer-processed data from the TURK and ACS systems. For both systems, we reviewed management controls and performed data validity tests at the FBI Chicago Division. Based on these test results and the FBI’s confirmation of the data, we concluded the data was sufficiently reliable to achieve our audit objective. Therefore, we did not repeat this process for our current audit.
We performed analyses of FBI resource allocation, resource utilization, and casework data to identify trends and note significant changes in the FBI’s operations from September 26, 1999, through September 18, 2004. We also reviewed U.S. Attorney criminal matters data for FYs 2000 through 2004, as well as the responses to our web-based survey. In total, this data amounted to 2,752,582 records.
FBI Human Resources
We conducted analyses of FBI Funded Staffing Levels (FSL) and Agent On-Board data.
Funded Staffing Level – We used the FBI’s FSL figures established by the Resource Management and Allocation Office to analyze agent resource allocations. We obtained field division FSLs for each program and each fiscal year, for both agents and support personnel for FYs 2000 through 2004. We also received FSLs for FBI Headquarters, organized by Division level, for the same period. These FSLs represented the final allocations set for each fiscal year, reflecting any mid-year adjustments. We reviewed the FBI’s agent allocations, focusing on changes in FSLs between FYs 2000 and 2004. The total FSL data amounted to 9,834 records.
Average On-Board (AOB) – TURK generally records percentages of time worked for both agents and support personnel in the FBI’s 56 field offices (Headquarters personnel do not record their time in TURK). TURK data collection is divided into 13 TURK periods per fiscal year; each TURK period is 4 weeks. Each employee records the percentage of time worked each day according to FBI investigative classifications (the percentages are based on a 10-hour day for agents and an 8-hour day for support personnel). These percentages are then averaged to express time worked in a specific classification as the equivalent of a full-time employee, which the FBI calls Average On-Board (AOB).
For example, if three agents within a particular field office each spent one-third of their time (33 percent) on Bank Robbery – FBI Investigative Classification 091A – within a given TURK period, the AOB for that field office (in Classification 091A, within the TURK period) would be equal to 1 agent AOB (100 percent of 1 agent-equivalent). The FBI considers the TURK system’s AOB data to be the best way to assess the actual time worked by FBI employees in specific FBI investigative programs, subprograms, and classifications. In this report, we use the terms AOB and on-board agent interchangeably.
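The AOB arithmetic described above can be sketched as follows. This is an illustrative computation only; the function and variable names are hypothetical and are not drawn from the actual TURK system.

```python
def aob_for_classification(time_fractions):
    """Compute Average On-Board (AOB) for one investigative
    classification within a single TURK period.

    time_fractions: the fraction of each agent's time (0.0 to 1.0)
    charged to the classification during the period. Summing the
    fractions yields full-time-agent equivalents.
    """
    return sum(time_fractions)

# Three agents each spending one-third of their time on Bank
# Robbery (Classification 091A) equal 1 agent AOB.
bank_robbery_aob = aob_for_classification([1/3, 1/3, 1/3])
```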
The FBI retroactively adds employee leave and miscellaneous time into the TURK record of each employee at the program/subprogram level. The FBI does this through use of an automated Investigative Program Allocator, which prorates the data back into each record based on that employee’s activity in the previous six pay periods. Therefore, to most accurately represent the FBI’s actual AOB figures, we requested separate data runs for AOB at the FBI’s investigative classification level and at the FBI’s program/subprogram level. Hence, only when presenting data at the classification level do we use data from the classification runs.
The classification level data run was provided in a text file, which we imported into a database file. The data run contained 611,333 records, each containing the following fields:
The program level data run contained the same fields noted above except for the Classification field. This data run, provided in a text file and imported into a database file, contained 410,902 records. We compared FY 2000 and FY 2003 AOB figures at the program and classification levels to the figures verified by the FBI in our Federal Bureau of Investigation Reprioritization report, issued in September 2004, to confirm that our current data and analysis methodology were correct.
Based on analyses of the AOB data at both the program and classification levels, we judgmentally selected 12 FBI field divisions for possible locations to conduct additional work: Atlanta; Chicago; Dallas; Denver; Detroit; Los Angeles; Miami; New Orleans; New York City; Phoenix; San Francisco; and Washington, D.C. We requested unclassified AOB data runs at the classification and program levels according to these offices’ resident agencies. A resident agency is a satellite office to one of the FBI’s 56 field divisions. The unclassified classification level data run, provided in a text file, was imported into a database file containing 302,293 records, each including the following fields:
The unclassified program level Resident Agency data run for the 12 field divisions contained the same fields noted above except for the Classification field. This data run was supplied in a text file and imported into a database file containing 503,147 records.
Agent Utilization – We elected to analyze AOB data by fiscal year. To do this, we totaled the AOB for all TURK periods within each fiscal year for each investigative program, subprogram, or classification. Next, we divided this total by the number of TURK periods (13) to obtain the average agents working a particular program, subprogram, or classification in a given fiscal year.
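This fiscal-year averaging can be sketched as follows; the names below are hypothetical and serve only to illustrate the calculation described above.

```python
TURK_PERIODS_PER_YEAR = 13  # each TURK period covers 4 weeks

def annual_average_aob(period_aobs):
    """Average the AOB recorded in each of a fiscal year's 13 TURK
    periods to obtain the average number of agents working a given
    program, subprogram, or classification during that year.

    period_aobs: AOB totals for each TURK period of the fiscal year.
    """
    return sum(period_aobs) / TURK_PERIODS_PER_YEAR

# e.g., 2 agent-equivalents in every period averages to 2 agents
# for the fiscal year.
yearly_aob = annual_average_aob([2.0] * 13)
```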
Analysis at the Program/Subprogram Level – We evaluated AOB data to identify internal operational changes in FBI investigative efforts occurring as a result of the FBI’s reprioritization and internal reorganization. Therefore, to assess the change in agent utilization, we focused our analysis on comparing AOB totals between FYs 2000 and 2004, while looking for conspicuous differences in AOB for FYs 2003 and 2004. This approach afforded a view of AOB both before and well into the FBI’s reprioritization efforts, revealing the areas of greatest change in actual agent-time worked.
In order to accurately compare the change in agent utilization at the program level, we adjusted AOB data to reflect the FBI’s program composition during FY 2004. During FY 2004, the FBI initiated the Criminal Enterprise Plan, which resulted in the restructuring of the FBI’s Criminal Investigative Division (CID). The implementation of this plan produced new program names and the transfer of particular subprograms and units. The FBI Program Crosswalk in Appendix III displays the current FBI program and subprogram architecture. Generally, we analyzed FBI program change according to this current structure.
Analysis at the Investigative Classification Level – Besides conducting analyses of resource utilization at the program/subprogram levels, we also performed analyses down to the classification level. We computed the change in agent AOB for each classification between FYs 2000 and 2004, noting those classifications experiencing significant changes. Appendix IV shows the classifications experiencing the greatest AOB reductions and increases between FYs 2000 and 2004.
For our analyses of the FBI’s casework, we received a data run from the ACS system, and focused on cases opened from September 26, 1999, through September 18, 2004. The data run was provided in a text file and imported into a database file containing 762,350 records, separated into the following fields:
In reviewing the data, we discovered 7,183 cases in the database designated as having been destroyed. Of these, 2,756 contained opening and closing dates and could therefore be included in analyses involving those dates. We retained these 2,756 destroyed cases and eliminated the remaining 4,427 cases that lacked open and close dates. The 4,427 eliminated cases represented approximately one percent of the remaining database of 370,622 cases on which we performed our analyses.
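The screening logic described above can be sketched as a simple filter. The field names below are hypothetical and do not reflect the actual ACS schema.

```python
def screen_destroyed_cases(cases):
    """Drop destroyed cases that lack opening and closing dates,
    mirroring the screening described above.

    cases: dicts with keys 'destroyed' (bool), 'open_date', and
    'close_date' (None when the date is missing).
    Returns (retained, eliminated).
    """
    retained, eliminated = [], []
    for case in cases:
        if case["destroyed"] and not (case["open_date"] and case["close_date"]):
            eliminated.append(case)  # unusable for date-based analyses
        else:
            retained.append(case)    # kept for analysis
    return retained, eliminated
```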
We confined our casework analysis to the data we obtained from the ACS system, and did not review individual case files to determine the actual level of effort expended on any single case. Thus, if a case was open during a particular timeframe, we considered it to be worked during that period.
Case Openings – The number of cases opened in a given time period indicates the types of cases the FBI was investigating. To conduct this evaluation, we first organized the cases according to the fiscal years in which they were opened. Then, we analyzed the difference in case openings between FYs 2000 and 2004 for FBI programs, subprograms, and investigative classifications. This analysis afforded perspective on the changes in the FBI’s level of investigative effort in different criminal areas, as well as in the FBI’s overall traditional crime operations.
Case Serials – The FBI’s ACS system records each document entered into a case file as a serial. An FBI Headquarters official informed us that the number of serials entered into a case during a given time period indicates the amount of effort devoted to that case. We agreed that this analysis would provide such perspective and requested reports of this activity. We obtained a document detailing the number of serials opened for the FBI as a whole, by investigative classification category, for FYs 1999 through 2004. We analyzed the number of serials opened in a given fiscal year, specifically evaluating the difference in serial quantities for certain investigative categories between FYs 2000 and 2004.
U.S. Attorney Criminal Matters Received
We requested U.S. Attorney data for all felony categories in the 94 federal judicial districts for certain federal law enforcement components (and their task forces where appropriate). The components used in our analyses of criminal matters received by the USAOs are listed in Appendix V. We believe these agencies encompass the majority of the federal investigative efforts in the types of crimes under review.
In analyzing the data files provided by the EOUSA, we concluded that data on criminal matters received by the USAOs provided the best perspective on the level of effort an investigative agency devoted to a particular criminal category. Criminal matters are investigative cases referred to the USAOs for review and possible prosecution; a matter becomes a prosecution case once defendants are charged. Thus, the number of USAO cases would not reflect investigative effort as well as the number of criminal matter referrals. Therefore, we analyzed the number of criminal matters received in particular federal crime violation categories. We assessed the change from FY 2000 to FY 2004 for all agencies combined and for specific agencies, chiefly the FBI. We converted the original text files into a database file containing 22,130 records. The following details the field categories for the U.S. Attorney data we evaluated:
Web-Based Survey of State and Local Law Enforcement Agencies
In order to obtain a large-scale perspective on the impact that the FBI’s shift in resources has had on state and local law enforcement agencies, we conducted a web-based survey. We set the parameters of the survey to focus on state and local law enforcement agencies located in the jurisdictional area of FBI field offices, and we judgmentally selected 12 jurisdictions: Atlanta; Chicago; Dallas; Denver; Detroit; Los Angeles; Miami; New Orleans; New York City; Phoenix; San Francisco; and Washington, D.C. During the selection process, we considered three primary factors: (1) FBI field agent utilization changes in traditional crime areas, (2) FBI field office size in terms of agent FSLs, and (3) geographic location to obtain a nationwide perspective.
After identifying the jurisdictional areas, we queried an electronic directory of law enforcement agencies to determine our survey population. The law enforcement agencies we concentrated on were state, county, municipal, tribal, and others, such as airport and railroad police. We excluded specialized local agencies such as university campus police departments. In total, our survey population amounted to 3,514 state and local law enforcement agencies, which generally encompassed all such agencies in the 12 jurisdictional areas.
Since the electronic directory did not contain e-mail addresses, we notified our population about the survey through an initial letter and later reminded them with a postcard. Each mailing was addressed to the chief law enforcement executive. We also followed up by telephone with larger departments, such as the Chicago Police Department and the New York City Police Department, to encourage their participation. State and local officials accessed the survey using a unique Internet address dedicated to the survey. In total, we obtained 1,265 responses from our population of 3,514 state and local law enforcement agencies, a response rate of 36 percent. The following table provides a breakdown of the survey respondents by location. A listing of all agencies that responded to the survey is located in Appendix X.
Any survey is subject to various types of response error. For example, questions might be interpreted differently by agency representatives, or agency officials might answer on different bases, such as readily available agency data or personal experience. In addition, respondents might not be uniformly conscientious in expressing their views, or they may be influenced by concerns about how their answers might be construed by the OIG, the FBI, or the public. We took various steps to limit these errors. For instance, we beta-tested the survey with local law enforcement agencies to address differences in how questions were interpreted. We also solicited comments from the FBI’s Criminal Investigative Division and Office of Law Enforcement Coordination on the content and clarity of the survey. We modified our survey questions based upon the beta-test results and the comments received from the FBI.
The survey responses were stored in a database within the survey software program. For analysis purposes, we exported the survey database, which contained 130,593 records, to another software program to conduct our examination of the responses. Detailed results of our survey are contained in Appendix VIII.