Lecture 3: Measuring Crime

Measuring Crime and Criminal Behavior: Lecture Overview

In last week’s readings and lecture, we examined instances in which moral panics generated widespread informal reports and rumors of crime when no actual crimes had occurred. If our definitions of crime are far from universal, and if the stories we casually tell about the existence of crime can be so wildly inaccurate, how can we possibly begin to build a careful accounting of the crimes that actually do occur? In this lecture and accompanying readings for this week, we consider the daunting task of making reliable measurements of crime… and discuss a few reasons why seemingly solid crime statistics might not be so solid after all.

Before you start to review this lecture, be sure to read the second-week assignments laid out for you in our course syllabus:

  1. Read Hagan Chapter 2
  2. Read Goldstein, Joseph. 2012. “Police Reports Suggest Officers May Sometimes Portray Crimes Less Seriously.” New York Times, September 16. Accessible through the “New York Times” database at UMA Library Databases web page.
  3. Listen to Glass, Ira. 2010. “Is That a Tape Recorder in Your Pocket, or Are You Just Unhappy to See Me?”. This American Life, September 10.
  4. Read Rayman, Graham. 2012. “The NYPD Tapes Confirmed”. Village Voice, March 7.
  5. Participate in Activity #3 (Evidence-Based Crime Interventions) via the “Activities” on our course Blackboard page.

Before you start your readings, consider listening to author Frank Hagan’s brief audio podcast, which describes Chapter 2 and summarizes the essentials of some of its content.

Our lecture subjects this week are:

Moral Panics Revisited: Thinking About Bath Salts

When I taught this class a few years ago, a student asked me a reasonable follow-up question based on her own professional experience:

I have worked in law enforcement for 15 years and over the course of 15 years in service I have spent countless hours in training regarding drugs and their effects. This particular subject makes me wonder about the new scare of designer drugs (bath salts and spice). Can you point me in the direction of research of these particular drugs, I’m just curious of how severe the problem really is. I completed a class a few years ago where we discussed what to look for when someone is under the influence of bath salts and it is a major concern with jails as we are not equipped to handle the effects of this drug. Since the completion of this particular training (class), I have not encountered anyone under the influence of bath salts and prior to this class we had 1 case where an arrestee was under the influence of bath salts. I’m really interested in this, I’d like factual data, or is this the latest “moral panic”?

Is there a moral panic going around regarding bath salts, the cocktail of drugs with stimulant and other properties that apparently hit Maine last year?

To answer that question, let’s review (from last week’s video segment) the main features of a moral panic:

  • A folk devil (either one member of a low-status group or a whole low-status group) is said to be exhibiting threatening behavior
  • A consensus emerges that the supposed threat is real
  • All the hubbub raised is overblown compared to the objective threat
  • The hubbub emerges quickly and also subsides quickly

Does the concern with bath salts fit the criteria of a moral panic? Well, it certainly has been volatile, quickly emerging into public consciousness and also quickly receding about a year ago. The following chart shows Google Trends patterns in search data for the phrase “bath salts” over time in the state of Maine, with search volume peaking in 2011-2012 and dropping back to nearly nothing shortly thereafter. By contrast, searches for “heroin” and “meth” have been growing more gradually over time in Maine through 2016:

Google Searches for Drug Categories of Bath Salts, Meth and Heroin in Maine, 2011-2016

If we look at the pattern over space within the United States, we can see that southern Maine has experienced the second-highest concentration of searches about “bath salts” in the nation, surpassed only by the region of Scranton, Pennsylvania…

Google Trends 2009-2014 Bath Salts search locality


The criterion of volatility in concern is clearly satisfied for bath salts (especially compared to the gradual increase in interest in heroin and meth). But what about the other criteria? Was the 2011-2012 concern regarding bath salts disproportionate to the actual problem? Here’s where assessment gets tricky. The following shows the average number of suspected bath salts cases reported to the Northern New England Poison Center each week during the years of 2010, 2011, 2012, 2013 and 2014 (unfortunately, the Poison Center stopped sharing such statistics after 2014):

Number of suspected bath salts cases in Maine per week, 2010-2014, according to the Northern New England Poison Control Center

Would you say that a handful of reported cases of bath salts intoxication per week in Maine constitutes an epidemic? What if I changed the units of measurement to yearly cases? Would reporting the same information as “226 cases in a year in 2011” be more liable to provoke concern? What if I pointed out that the number of cases reported to the Poison Center would typically represent the most severe, troubling and out-of-control cases, that reporting to poison centers is not mandatory, and therefore that likely many more instances of bath salts use in Maine have occurred each year? Would you say this is “disproportionate” or not?
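The unit-framing point above is purely arithmetic, and a few lines of Python make it concrete. The yearly total of 226 cases comes from the lecture; the weekly figure is simply that total spread across 52 weeks:

```python
# The same Poison Center figure, framed in two different units.
# 226 suspected cases in Maine in 2011 (from the lecture text);
# the weekly number is just that total divided across 52 weeks.
yearly_cases = 226
weekly_cases = yearly_cases / 52

print(f"{yearly_cases} suspected cases per year")      # sounds like an epidemic
print(f"{weekly_cases:.1f} suspected cases per week")  # sounds like a handful
```

Identical information, very different emotional weight: about 4.3 cases a week versus 226 a year.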

In addition, has any low-status group been targeted as responsible for the reported bath salts epidemic? I have trouble identifying any pattern of derogatory references to any group in the news coverage of bath salts, although I encourage you to review available news sources and see what you conclude. Do you find any pattern here?

While there certainly was a great deal of concern regarding bath salts in Maine, especially four to five years ago, that concern does not necessarily indicate a moral panic. The problem of bath salt use is not a slam-dunk case of a moral panic because it’s difficult to assess whether the concern is disproportionate to the danger, and there’s not an immediately identifiable scapegoat.

Even though we don’t appear to have had an old-fashioned moral panic in Maine when it comes to the case of bath salts, we do see evidence of a mala prohibita process of authorities encountering a new source of harm, identifying that source of harm as a social problem, and deciding to classify that source of harm as criminal in nature. When bath salts first came to Maine, buying and ingesting the concoction was not criminal. It was, in fact, possible to openly purchase the substance in local “head shops.” State and federal criminalization in 2011 was part of a concerted effort to demonstrate public disapproval and diminish a worrisome wave of use.

Moving through Doubt to Measurement: the example of Rural Crime

Drawing inspiration from T.S. Eliot’s poem “East Coker,” the following video recorded in the early spring of this year begins where we left off at the end of our previous week’s discussion: in doubt, confronted with instances of supposed crime as hyped moral panic, wondering how it is possible to distinguish the hyped from the real. The video further explores such doubt, discussing reasons why crime may be underreported in rural areas and overreported in urban areas. When dealing with information about crime, it is almost always better to make systematic observation than to traffic in stories, but with all statistics it is important to question, question, question.

In the video above, we delve into the criminological literature to question the reality of the urban/rural crime difference in general. In a post to the Kennebec Journal in the spring of 2015, I apply those questions to the particular case of the capital city of Augusta in Maine, which I argue has possibly gotten a bum rap for its “high” crime rate:

When the real estate website Movoto declared Augusta to be the “#1 most dangerous place in Maine” this February, was that fair? An article in the Kennebec Journal listed a number of sensible complaints about this label. In this post, I’d like to develop one thread of criticism a bit further: crime rates in Augusta may appear unfairly large simply because Augusta is a city.

Movoto didn’t do any original research to come up with its ranking; it relied on the FBI’s Crime in the United States report for 2013, to which more than 18,000 law enforcement agencies contributed counts of crimes observed by police. The FBI measures crime as a rate, the number of observed crimes per 100,000 residents. The chart you see here draws from the FBI’s own data to show that the areas of Maine with the highest crime rates are apparently our state’s largest cities, including Augusta.

Population Size by Violent Crime Rate per 100,000 residents in the state of Maine. Source: Crime in the United States 2013. Augusta appears in red.

This trend does not always match other observations, however, and that’s because the FBI’s data is far from perfect. Based on interviews with young people, social scientists Jay Greene and Greg Forster conclude that young people living in cities fight, engage in petty theft, use drugs and drink alcohol no more often than students who live in suburbs. According to criminologists Barry Ruback and Kim Ménard, rape crisis centers report a higher rate of sexual assault cases in rural areas than in cities.

Yet the arrest rate for these sorts of offenses is higher in cities. Why? Police cars more regularly patrol densely-packed areas in cities and are therefore more likely to see crime when it occurs. When young people commit crimes in suburban or rural areas, few or no police see it because those police are spread out over large areas. When young people commit crimes in cities, they are more likely to be seen and arrested.

In addition, many people commute to cities during the day to work, study or obtain social services, as Michael Shepherd points out in his article. This trend increases the number of people in cities in the hours when people are most active, and simultaneously decreases the number of people in suburban or rural areas. When some of these suburban or rural people visit cities, they commit crimes. Now, crime rates are fractions, with the number of crimes as the numerator and the number of residents as the denominator. The crimes visitors commit are assigned to cities, but those same visitors are not counted as residents of cities when calculating the rate of crime. Instead, they are counted as residents of the rural or suburban places they only sleep in.
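The arithmetic of this commuter distortion can be sketched in a few lines of Python. All of the numbers here are hypothetical, chosen only to illustrate how the fraction behaves when visitors' crimes enter the numerator but visitors themselves never enter the denominator:

```python
# Hypothetical sketch: how daytime commuters can inflate a city's
# official crime rate. The figures below are invented for illustration.

def crime_rate_per_100k(crimes, population):
    """FBI-style rate: offenses per 100,000 people in the denominator."""
    return crimes / population * 100_000

city_residents = 19_000        # hypothetical resident population
crimes_by_residents = 80       # hypothetical offenses by residents
crimes_by_visitors = 40        # hypothetical offenses by commuters/visitors

# Official rate: visitors' crimes count in the numerator,
# but visitors are not counted in the resident denominator.
official = crime_rate_per_100k(
    crimes_by_residents + crimes_by_visitors, city_residents)

# Rate if the larger daytime population were counted instead.
daytime_population = city_residents + 10_000
adjusted = crime_rate_per_100k(
    crimes_by_residents + crimes_by_visitors, daytime_population)

print(f"official rate:         {official:.0f} per 100,000")
print(f"daytime-adjusted rate: {adjusted:.0f} per 100,000")
```

With these made-up figures, the official rate comes out roughly half again as large as the daytime-adjusted rate, even though the number of crimes is identical in both calculations.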

Put it all together and crime rates will appear larger inside cities and smaller outside cities.

For reasons like these, the FBI declares it unfair to rank cities against small towns using its data.  Unfortunately, for websites desperate to attract readers and advertisers, headlines disparaging our largest cities may be just too juicy to resist.

The Language of Research: Hypotheses, Variables and Operationalizations

If systematic observation through research is important in order for us to move beyond doubt to make some firm conclusions about crime, then it is essential that we confront and master the language of research and start to work at moving from reading processed textbooks to understanding research articles. As an undergraduate student and a high school student before that, you may be familiar with textbooks as the dominant mode of written instruction. But in both the academic and professional sphere, social scientists most commonly accumulate and disseminate knowledge through research articles published in journals. Social scientists subscribe to prominent social science journals such as the American Sociological Review, Social Networks, and Criminology so that they can maintain an awareness of knowledge in their fields. When conducting their own research, they also may review the literature on a subject by searching journal articles through free services such as Google Scholar or paid services such as JSTOR and Academic Search Complete (available to you at the UMA Library Database page).

Your journey as an undergraduate student should take you from lower-level classes dominated by textbooks to upper-level classes that incorporate a significant number of professional research articles. By the time you have graduated with a Bachelor’s degree in the social sciences, you may not understand every method employed in professional research. Nevertheless, you should be able to read, understand, and usefully react to the hypotheses and variables described in a research article. Reading a research article is quite unlike reading any other form of writing. It’s a practiced skill that requires patience and repeated effort. This week’s activity is designed to put just a little bit of that practice under your belt, asking you to “identify a research study” and in that study “name the independent and dependent variables, and indicate how those variables are operationalized.” Let’s talk a little bit about independent variables, dependent variables, what operationalization means and how this all fits together into a hypothesis.

In research, a variable is simply something that can be measured and the state of which varies from measurement to measurement. The number of gray hairs on a person’s head is a variable, and its value (measured as a count of gray hairs) usually increases over time as that person ages. Researchers in criminology are typically interested in studying variation in some kind of outcome related to crime, and in trying to figure out what explains that variation. The outcome of interest studied by a criminologist (for instance, the number of violent crimes in a year by a person, or the number of violent crimes observed in a state per 100,000 population in the state) is called a dependent variable.

What makes the crime rate go higher? What makes the crime rate fall lower? An independent variable, like a dependent variable, is something that can be observed in the social world and that can take on different values. Independent variables are important because when they change, the dependent variable may change too. For instance, someone studying the crime rate may suggest that as the number of police cars present in a neighborhood goes up, the crime rate in that neighborhood goes down. “Number of police cars in a neighborhood” is an independent variable, “crime rate in a neighborhood” is a dependent variable — and the suggestion about how variation in one is associated with variation in the other is called a hypothesis. A hypothesis is a researcher’s best guess about the pattern in the data that they will find.

Finally, in order to test a hypothesis in the real world, it is essential for a researcher to be able to precisely measure each variable. To operationalize a variable is to describe exactly how values of a variable will be measured in the real world. For instance, if the “crime rate” is a variable a researcher is interested in, that researcher will have to say something specific about what she or he means by the words “crime rate.” What is the crime of interest, and what is a rate? “The number of murders observed in a state during a given year per 100,000 population in the state, as reported by the FBI’s Uniform Crime Reports” is an exact indication of how the researcher will measure the variable “crime rate.” This exact measurement is useful because readers can understand exactly how research has been carried out, and other researchers will know exactly how to replicate the original researcher’s work to verify any findings.
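An operationalization is precise enough when someone else could compute the variable from the same raw inputs. A minimal sketch of the murder-rate operationalization described above, with made-up input figures for illustration:

```python
# A sketch of one operationalization of the variable "crime rate":
# murders observed in a state in a given year per 100,000 residents,
# as the FBI's Uniform Crime Reports express it.

def murder_rate_per_100k(murders_reported, state_population):
    """Operationalized dependent variable: murders per 100,000 residents."""
    return murders_reported / state_population * 100_000

# Hypothetical inputs for illustration only (not real UCR figures):
rate = murder_rate_per_100k(murders_reported=25, state_population=1_330_000)
print(f"murder rate: {rate:.2f} per 100,000 residents")
```

Because the function spells out exactly what is counted and over what denominator, another researcher handed the same counts would arrive at the same number — which is the point of operationalizing in the first place.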

The Dark Figure of Crime: Misrepresenting Crime in America’s Cities

Before you started to read this week’s lecture, you read (or listened to) the reports of Joseph Goldstein, Ira Glass and Graham Rayman. In combination, and corroborating one another, they paint a picture of a New York City Police Department that distorts crime statistics, at once inflating arrest statistics for minor offenses to create a picture of an active police force and at the same time dampening police reports of serious crimes to create the image of a rejuvenated Big Apple that is safe and tourist-friendly. The reports by Goldstein, Glass and Rayman by and large are not indictments of New York City police officers, who put their lives on the line every day in order to keep others safe. Rather, these reports indict the leaders of the NYPD who are (if these reports are to be believed) so concerned with the popular image of New York created by crime statistics that they are willing to tamper with crime reports to preserve that popular image.

New York City is hardly the only city in the United States that’s faced a scandal regarding the misreporting of crime for public relations purposes. In the 1990s, the city of Philadelphia faced a similar problem. Reporters Michael Matza, Craig R. McCoy and Mark Fazlollah of the Philadelphia Inquirer shared the following account in 1998:

Soon, the Police Department will begin using sting tactics on its own officers, having undercover investigators pose as crime victims to see whether police report the incidents properly.

Now, the Justice Department is starting an inquiry into the fudging of crime statistics in the nation’s fifth-largest city.

Rarely if ever has there been so much pressure on Philadelphia police to present an accurate picture of crime.

“Going down with crime” – downgrading major offenses to minor ones to polish the image of commanders and police commissioners and make the city look safer – has been a reflex in police station houses for decades.

The methods and motives varied, but the result was almost always the same – to shift offenses out of the “Part I” group of major crimes tallied nationally by the FBI and watched closely by the media, the public, politicians and the headquarters brass.

The practice endured, top commanders now say, because favorable statistics made higher-ups happy and helped careers. It endured also because the department’s leaders rarely put teeth into their rhetoric about accuracy – never insisted that the numbers be right as well as rosy.

“Every commissioner told the troops that they wanted accurate coding – and none of them meant it,” said Chief Inspector Vincent R. DeBlasis, 60, a 39-year police veteran and a former chief of detectives. “It was all window dressing.”

When underreporting practices by law enforcement in Philadelphia were stopped, the reported crime rate in Philadelphia promptly rose by 9 percent.

What does this mean? Is the sharp fall in the national violent crime rate shown in the graph below merely an artifact of police underreporting? Let’s look at data through 2014 (reports on 2015 are due to be released some time toward the end of September 2016 — I’ll let you know when that happens):

Violent Crime Rate, 1960-2014

There are two reasons why such a conclusion would probably be an overreaction. First, the very nature of murder, considered by many to be the most serious crime, resists underreporting. Murder unfortunately creates a dead body, and these are difficult to undercount. Yet the murder rate shows a decline in recent years, a decline similar to the overall reported decline in violent crime:

Chart: United States Murder Rate per 100,000 population, 1960 to 2014

In addition, as this graph from the Bureau of Justice Statistics shows, National Crime Victimization Survey trends — of violent and property crimes reported to researchers by crime victims — have also shown a steady drop in recent years:

National Crime Victimization Survey 2013 Results, Figure 1: Sharp Declines in Violent and Property Crime

The fact that this trend from reports by crime victims matches the trend of reports by police officers leads criminologists to feel more confident that the trend is real, and not just the consequence of changes in reporting by either police officers or victims.

While there are many good reasons to question the veracity of crime reports, there are also some good reasons to suspect that, using multiple methods, some real and important trends in criminal behavior can be uncovered. As Frank Hagan declares in your course text, the use of multiple measures and multiple methods helps to ensure that an apparent trend in crime statistics reflects an actual underlying trend in crime being experienced in society.

Activity for the Week

This week, I’m asking you to click on the “Activities” link on the left-hand side of our course Blackboard page and complete the following activity:

On pp. 35-37 of Frank Hagan’s Introduction to Criminology textbook, two sources for “evidence-based research” on effective and ineffective crime reduction methods are listed:

Find a supposed crime reduction method that is identified in one of the sources. For the source you have chosen:

      1. identify the overall finding: is the method evaluated as effective, ineffective, or promising but of uncertain effectiveness?
      2. report how many studies the evaluation is based upon
      3. identify one of the research studies forming the basis of that evaluation, name the independent and dependent variables for that study, and indicate how those variables are operationalized.

To receive credit for Activity 3, log in to Blackboard and upload your work to “Activity #3: Evidence-Based Crime Intervention” in the “Activities” section of our course’s Blackboard page. Activity 3 is due by September 17.

One of the reasons I’m asking you to do this is to start immersing yourself in the material of criminological research. A good criminologist tests the claims of others — and I want you to apply that standard to me as an instructor and to Frank Hagan as a textbook author. This week’s activity work is clearly related to the “evidence-based research” section of Hagan’s textbook on pages 35-37. Unfortunately, some of the references made by Hagan are out of commission. On page 35, the reference to the Bureau of Justice Statistics website is out of date — the current link is actually www.bjs.gov. The preventingcrime.org website referenced on p.35 has been defunct for more than half a decade, and the work of that website by the University of Maryland has not been updated for more than a decade.

Fortunately, that work in producing evidence-based evaluations of crime research is preserved at another website, http://www.ncjrs.gov/works, and new evidence-based evaluations are being produced through a separate effort at http://crimesolutions.gov. I look forward to receiving and learning from your review — not of these entire websites, but of just ONE crime reduction method described on BOTH websites. If you have any questions regarding this week’s activity, please send them on to me and I’ll be glad to point you in the right direction.


Carter, Timothy J. 1982. Rural Crime: Integrating Research and Prevention. New Jersey: Rowman & Littlefield.

Donnermeyer, Joseph F. 1994. “Crime and Violence in Rural Communities.” Pp. 27-63 in Perspectives on Violence and Substance Use in Rural America. Oakbrook, IL: North Central Regional Educational Laboratory.

Greene, Jay P. and Greg Forster. 2004. “Sex, Drugs, and Delinquency in Urban and Suburban Public Schools.” Manhattan Institute for Policy Research, Education Working Paper Number 4.

Ruback, R. Barry and Kim S. Menard. 2001. “Rural-Urban Differences in Sexual Victimization and Reporting: Analyses Using UCR and Crisis Center Data.” Criminal Justice and Behavior 28(2):131-155.

Weisheit, Ralph A. and Joseph F. Donnermeyer. 2000. “Changes and Continuity in Crime in Rural America.” Pp. 309-359 in Criminal Justice 2000 Series, Volume 1: The Nature of Crime. Washington, DC: National Institute of Justice.

Weisheit, Ralph A., David Falcone and L. Edward Wells. 1994. “Rural Crime and Rural Policing.” Research In Action September: 1-15. Washington, DC: National Institute of Justice.

Wing, Janeena. 2009. “Unmasking Unreported Crime: Idaho Crime Victimization Survey 2008.” BJS/JRSA National Conference Proceedings.
