Criminological Highlights Vol. 5, No. 1 – July 2002

View this issue as a PDF: CrimHighlightsV5N1.pdf

This issue of Criminological Highlights addresses the following questions:

  1. Do Americans still want tough crime policies?
  2. Which types of police officers are likely to be the subject of citizen complaints?
  3. What are the risks of talking about young psychopaths?
  4. Do sexual offenders really have high reconviction rates?
  5. Is it really predictive of guilt to know that a man accused of killing his wife had been unfaithful, or was considering leaving her?
  6. Why is “broken windows” policing likely incompatible with “community policing”?
  7. Do “geographic profiling systems” contribute anything special in locating offenders?
  8. What does “proportionality” in sentencing mean?

Americans are beginning to tire of ‘tough on crime’ policies and are turning to prevention rather than prisons as a more appropriate response to crime.

Background. American politicians have successfully run election campaigns using crime as their vehicle to public office. It appears that things have changed somewhat since the peak of crime in the early 1990s. Since that time, crime – particularly violent crime in the U.S. – has leveled off in many states while imprisonment rates have hit all-time highs (with 2 million Americans in state or federal prisons or jails). A recent survey of public attitudes shows the following:

  • Preferred approach to crime:  ‘Tough on crime’ strategies (with an emphasis on strict sentencing, capital punishment and less parole) - 42% in favour in January 1994 versus 32% in September 2001. ‘Tough on causes of crime’ strategies (with a focus on job training, family counseling, etc.) – 48% in favour in January 1994 versus 65% in September 2001. Even Republicans are more likely to be in favour of addressing the causes of crime than simply adopting a tougher approach to crime itself.
  • Current top priority for dealing with crime: Prevention - 37%; Rehabilitation - 17%; Enforcement (such as putting more police on streets) - 19%; Longer sentences and more prisons - 20%.
  • Support for mandatory sentences: 55% (in favour) in June 1995 versus 38% (in favour) in September 2001.
  • A majority (54%) of Americans presently think that America’s approach to crime is on the wrong track. In contrast, 35% think that it is headed in the right direction and 11% are not sure. 
  • In particular, the war on drugs is currently seen by 70% of Americans as more of a failure than a success. Only 18% thought that it was more of a success while 9% saw good in some parts and not in others. 3% were uncertain. 
  • People presently view prisons simply as warehouses: 58% see attempts at rehabilitation as having been very or somewhat unsuccessful, only 34% see them as having been successful, and the rest (8%) are not sure.

In terms of what to do now, the picture is clear:

  • Most (76%) want mandatory treatment rather than prison time for drug possession and 71% also want treatment instead of imprisonment for selling small amounts of drugs. 
  • Alternatives to prison were favoured for youthful offenders (85% in favour) and non-violent offenders (75% in favour). Other similar programs (e.g., intermittent custody) which reduce prison sentences for non-violent offenders were also favoured by the majority of the American public. 
  • Most Americans (56%) want to get rid of mandatory minimum sentences. Again, this attitude was even true of Republicans (51%).
  • The majority of Americans favour job-related rehabilitation programs such as mandatory prison labour (94%), required classes (91%) and job training for released prisoners (88%).
  • Most Americans (77%) agree that the expansion of after-school programs and other crime prevention strategies would lead to long term savings by reducing the need for prisons. An equal proportion of the American public believes that treatment programs for drug offenders would save money. 
  • The events of September 11, 2001 did not alter Americans’ views with regard to the best way of dealing with crime. 

Conclusion. “There is widespread agreement that the [American] nation’s existing approach to criminal justice is off-target” (p.6). It would seem that Americans are looking for effective ways of addressing the real problems of crime. Public opinion surveys in the past year suggest that there has been a shift from punitiveness to effectiveness. In the past, politicians appear to have led rather than followed the public toward harsh policies (see Beckett, Making Crime Pay. Oxford, 1997).  Currently, they would seem well advised to change direction if they wish to stay in step with their constituencies.

Reference: Peter D. Hart Research Associates (2002). Changing Public Attitudes Toward the Criminal Justice System.  The Open Society Institute.


Police officers against whom complaints are most often made tend to be among the most active members of a force. However, they are also more likely to use physical force. 

Background.  Citizen complaints against police officers (e.g., for rude or aggressive behaviour) do not appear to be randomly distributed. Some police officers are more likely to be subject to complaints than others. Three theories exist regarding the meaning or significance of police complaints. First, it is possible that complaints tell us little (if anything) about the subject (i.e. the police officer) of the complaint. The second possibility is that a police officer who is the subject of a disproportionately large number of complaints is a potential problem individual. In this case, the complaints serve as a measure of inappropriate police behaviour. Third, it is possible that complaints are an indicator of high levels of police productivity. This is to say that police officers who are subject to a high number of complaints may simply be more active and more likely to place themselves in situations in which complaints might be elicited (e.g., in arrests or proactive stops of citizens). Logically, a police officer who does nothing is unlikely to be the subject of a citizen complaint.

This study examined the behaviour of St. Petersburg, Florida, police officers during their ordinary shifts. Observers watched 94 police officers who had been the subject of either a disproportionately high number of complaints (at least one per year) or a low number (no more than one every five years). Their use of force or discourtesy was recorded, as was the rate at which they issued “commands” to citizens, searched them, and so on. Ordinary “encounters” with citizens who were suspects or disputants were also counted. 

The results are straightforward and lend support to both the second and third hypotheses. 

  • Officers with a high number of complaints tended to use more forceful tactics per suspect encountered than did officers with a low number of complaints.  
  • No differences existed between high and low-complaint officers in the rate at which they were discourteous to citizens. 
  • There were no differences between high and low-complaint officers regarding their use of searches or commands and threats with suspects.
  • However, officers who were the subject of a high number of complaints were more likely to interrogate suspects to determine whether they were involved in wrongdoing. As well, they were more apt to initiate encounters with citizens than were officers with a low number of complaints. 

Conclusion. This study involving the systematic observation of police officers supports two of the proposed major perspectives. In comparison with police officers with few complaints against them, those who are subject to a large number of complaints are more likely to place themselves in situations in which complaints tend to emerge (e.g., interrogations and proactive stops). However, they are also more apt to use forceful tactics against the citizens whom they encounter. 

Reference: Terrill, William and John McCluskey (2002). Citizen Complaints and Problem Officers: Examining Officer Behavior. Journal of Criminal Justice, 30, 143-155.


Assessing psychopathy in youthful offenders is almost certain to result in ordinary adolescents being labelled as psychopaths.

Background.  Research on adult psychopathy has noted that these individuals often displayed antisocial behaviour as youths. Based on this finding, researchers have begun looking for ways to identify “fledgling psychopaths” (p.219). Particularly given public concern about youth crime, it is not surprising that efforts to predict violence have inevitably started to focus on “juvenile psychopathy.” This paper (as well as the commentaries that follow it) examines the dangers of such an approach.

The difficulties with such a strategy are multiple. First, the relatively transient quality of behavioural patterns in normal adolescence makes it likely that assessments with measures adapted from adult instruments will identify normal youths as psychopaths. In addition, although some of these measures have already been developed, they have not yet been sufficiently validated, nor do they yet have published guidelines on their use. These deficiencies are problematic. For example, if the existing assessment tools are to be useful, they must measure stable traits. Yet, “there have been no published studies using the instruments… at different points in time during … childhood or adolescence” (p.232). Moreover, “no published studies have addressed whether high psychopathy scores in adolescence predict high psychopathy scores in adulthood, much less a higher risk of violent and other antisocial conduct in adulthood” (p.234). Further problems exist in interpreting even short-term predictability from these measures. Indeed, some studies have shown weak relationships between juvenile psychopathy and offending, but have not even attempted to control for other known “risk” factors such as substance abuse or ADHD.

Interestingly, supporters of efforts to measure psychopathy such as Stephen Hart at Simon Fraser University agree with the call for caution with respect to the infiltration of adolescence by the merchants of psychopathy. As Hart notes, “there is no consensus among developmental psychopathologists that a personality disorder as a general class of psychopathology even exists in childhood or adolescence… There are good reasons… to believe that personality does not crystallize until at least late adolescence or even early adulthood… If stable personality does not exist… then surely personality disorder cannot” (p.242). In addition, the limited information “used to assess juvenile psychopathy imposes a limit on the accuracy and reliability of the assessment” (p.243). Other researchers note that the concerns raised with respect to psychopathy hold for other measures of psychopathology as well (pp. 248-9).

Conclusion. Psychopathy has become popular in adult criminal justice because of the claims made concerning its predictive value. However, a set of serious “conceptual problems bedevil research in this area” (p.244) when it is applied to youth. Particularly since “no one knows if what appear to be traits of psychopathy in childhood or adolescence persist across even short periods of time,” it would appear that the experts’ advice to beware of the psychopathy sellers is worth heeding. 

References: Seagrave, Daniel and Thomas Grisso (2002). Adolescent Development and the Measurement of Juvenile Psychopathy. Law and Human Behavior, 26, 219-239. Hart, Stephen D., K. A. Watt and G. M. Vincent. Commentary on Seagrave and Grisso: Impressions of the State of the Art (pp. 241-245). Frick, Paul J. Juvenile Psychopathy from a Developmental Perspective: Implications for Construct Development and Use in Forensic Assessments (pp. 247-253). Lynam, Donald R. Fledgling Psychopathy: A View from Personality Theory (pp. 255-259).


Sex offenders are not reconvicted at the rate that many people think they are. Parole boards over-predict re-offending for these prisoners. The notion that certain groups of sex offenders are driven to commit additional sex offences on release is challenged by this study.

Background.  “There is a widespread assumption in the mass media and probably amongst the public… that sex offenders (especially those who offend against children) are particularly prone to repeat their crimes” (p.371). However, the data on reconviction tend to challenge this assumption (see, for example, Highlights Vol. 3, No. 3, Item 3). It is not reconviction per se that is important. Rather, it is the type of offence that is clearly of most concern.  

This study followed, for at least two years after release (and for six years in the case of 94 of them), 174 male prisoners in the U.K. who had been convicted of a serious sex offence. These offenders were subsequently divided into groups (e.g., adult vs. child victim, male or female child victims, stranger or known victim, single vs. multiple victims, whether the offence against a child had taken place within the family unit). 60% of these offenders had at least one child victim, and approximately one quarter of these victims were male. Parole board hearings were also monitored, which allowed the researchers to determine whether an offender had been described as posing a ‘high risk’ (p.373). 

The results suggest a pattern of reconviction that is lower than most definitions of ‘high risk.’ [Note, of course, that the study deals only with reconvictions. Presumably there could have been some re-offending that was not reported or in which the offender was not apprehended.]

  • 6.7% (11) of the 162 offenders who had been in the community for at least 4 years had been reconvicted of a sexual offence. Of the 6 who had previously been convicted of an offence involving an adult, all but one were reconvicted of an offence against an adult. Four of the other 5 whose original offences involved children were reconvicted for offences against children. 
  • An additional 5.6% (9) were reconvicted of a (non-sexual) violent offence. 

An examination of the 94 who had been out for six years or more shows that 8.5% (8) had been reconvicted for a sexual offence and imprisoned during this period and another 4.3% (4) were reconvicted for a violent offence and also incarcerated. A total of 18.1% (17) were imprisoned for some offence. An additional 12.8% (12) were reconvicted for some other offence but not sent to prison. In total, 30.9% (29) were reconvicted, but most of these reconvictions were clearly not for sexual offences. 

Looking only at the 6-year follow-up of those who had originally offended against children, none of the 31 whose victims had been within the family were reconvicted of a sexual or violent offence and imprisoned. Of the 19 who were originally convicted for extra-familial offences against children, 6 (32%) were reconvicted for a sexual or violent offence and were incarcerated. The 6-year reconviction rates for offenders against children and offenders whose victims were exclusively adults were not dramatically different, with the exception of those whose child victim was in the family. None of these individuals were reconvicted for a violent or sexual offence. 

92% of those identified as “high risk” by a member of the parole board were not reconvicted of a sexual offence within four years. At the same time, the parole board had labelled all but one of the repeat sexual offenders as high risk. By “over-predicting” risk, the board identified those who re-offended as well as many who did not. A statistical device used to make predictions was moderately related to reconviction, although 13 of the 22 offenders it identified as “high risk” (59%) were not reconvicted.  
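
To make the over-prediction point concrete, the short sketch below simply turns the percentages reported above into the proportion of “high risk” labels that were borne out by a sexual reconviction within four years. It uses only the figures in the text; nothing further is assumed.

```python
# Sketch: proportion of "high risk" labels confirmed by a sexual reconviction
# within four years. Both figures come from the text above.

# Parole board: 92% of those labelled "high risk" were NOT reconvicted of a
# sexual offence within four years.
board_not_reconvicted = 0.92
print(f"Parole board 'high risk' labels confirmed: {1 - board_not_reconvicted:.0%}")   # 8%

# Statistical prediction device: 13 of the 22 it labelled "high risk" were not reconvicted.
device_flagged, device_not_reconvicted = 22, 13
print(f"Statistical device labels confirmed: "
      f"{(device_flagged - device_not_reconvicted) / device_flagged:.0%}")             # ~41%
```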

Conclusion. Reconviction rates for most groups of sex offenders are lower than they are typically assumed to be. In particular, the notion that those who offend against children will almost invariably be reconvicted is challenged by these data. Parole board assessments of risk can appear “correct” because the board labels a large proportion of offenders “high risk”: it thereby accurately identifies the repeat sexual and dangerous offenders, but also identifies a large number who are not, in fact, reconvicted.

Reference: Hood, Roger, Stephen Shute, Martina Feilzer, and Aidan Wilcox (2002). Sex Offenders Emerging from Long-Term Imprisonment. British Journal of Criminology, 42, 371-394.


Intuitive profiling – the assertion that the accused fits the informal stereotype of the type of person likely to commit the crime in question – is liable to be deceptive, even though courts have deemed it to be probative evidence.

Background.  A man is charged with the killing of his wife. It is argued that the fact that he had been unfaithful to her constitutes evidence that he killed her rather than, as argued by the defence, that she died accidentally. Intuitively, the court decides that unfaithful husbands are more likely to kill their wives than faithful husbands and admits the evidence on that ground. What’s wrong with this?  Lots, it turns out.

This paper examines the issue of intuitive profiling and notes that sensible ways of evaluating this evidence exist but are rarely understood. A simple example is presented using the above-mentioned scenario. Imagine that we are looking at the behaviour of 1 million men. US data would suggest that we might expect 26% of them to be unfaithful to their wives. Independently, the maximum probability that a man would kill his wife at some point during the marriage might be estimated as 240 per million married men. If one further assumes the maximum relationship between these two variables, one would assert that all 240 married men who killed their spouses had been unfaithful (see table).  

                        Faithful    Unfaithful        Total
  Killed wife                  0           240          240
  Did not kill wife      740,000       259,760      999,760
  Total                  740,000       260,000    1,000,000

These data lead to the following outcome: the probability of murder if faithful is (hypothetically) zero. The probability of murder if unfaithful is 240/260,000, or 0.09%. Said differently, 99.91% of the unfaithful men did not kill their wives. Thus, “one can conclude that at maximum it is… less than 1/10 of 1% more likely that an unfaithful man will murder his wife at some point in their marriage than it is that a faithful man will murder his wife” (p.138). 
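
The arithmetic can be reproduced directly from the table. The following minimal sketch (in Python, for illustration only) uses the same figures as the example above – the 26% infidelity rate, 240 killings per million married men, and the “maximum relationship” assumption that every killer was unfaithful:

```python
# Reproduces the arithmetic behind the table above. All figures come from the
# paper's example: 26% of married men unfaithful, 240 wife-killings per million
# married men, and the maximum-relationship assumption that every killer was unfaithful.

total_men = 1_000_000
unfaithful = int(0.26 * total_men)     # 260,000
faithful = total_men - unfaithful      # 740,000
killers = 240                          # all assumed to be unfaithful

p_kill_given_unfaithful = killers / unfaithful   # 240 / 260,000
p_kill_given_faithful = 0 / faithful             # zero, by the maximum-relationship assumption

print(f"P(kill | unfaithful) = {p_kill_given_unfaithful:.4%}")  # about 0.09%
print(f"P(kill | faithful)   = {p_kill_given_faithful:.4%}")    # 0.00%
print(f"Difference           = {p_kill_given_unfaithful:.4%}")  # less than 1/10 of 1%
```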

In general, it turns out that the usefulness of a predictor is smallest when the base rate of the behaviour (the crime) is low and the base rate of the predictor is relatively high. “Unfortunately, for many (if not most) of the profiling predictors in the legal system, the base rate of the predictor far exceeds the base rate of the crime. Thus the predictor will not be probative – either at all, or sufficiently to outweigh its potential prejudicial value” (p.139). For example, it is shown that “intention to dissolve [a] marriage [on the part of a man] is not meaningfully more probative [of killing one’s wife] than infidelity” (p.144). Multiple predictors improve matters somewhat as long as they are largely unrelated and the predictor itself has a low base rate. Further, base rates themselves clearly have to be established for relevant populations. In the table above, the base rate of wife-killing varies with certain population characteristics. 
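
The point about multiple predictors can be expressed with a standard Bayesian likelihood-ratio update. The sketch below is illustrative only: the likelihood-ratio framing is a common way to formalize this kind of reasoning rather than the authors’ own calculation, and the figures for the hypothetical second predictor (present in 90% of killers and 30% of non-killers) are invented.

```python
# Illustrative sketch only: how independent predictors combine via likelihood
# ratios (a standard Bayesian framing, not a calculation from the paper).
# The second predictor's figures (90% of killers, 30% of non-killers) are invented.

def posterior_prob(prior_prob, likelihood_ratios):
    """Update a prior probability using independent predictors' likelihood ratios."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 240 / 1_000_000                    # base rate of wife-killing from the example
lr_infidelity = 1.0 / (259_760 / 999_760)  # ~3.85: all killers unfaithful vs. ~26% of non-killers
lr_second = 0.9 / 0.3                      # 3.0: hypothetical second, independent predictor

print(f"{posterior_prob(prior, [lr_infidelity]):.2%}")             # ~0.09%
print(f"{posterior_prob(prior, [lr_infidelity, lr_second]):.2%}")  # ~0.28% - still tiny
```

Even with two independent predictors, the probability remains well under one percent because the base rate of the crime itself is so low – which is the central point of the paper.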

Conclusion. Information such as the unfaithfulness of a man accused of killing his wife is often admitted in court without the analysis showing that it only improves the accuracy of a judgment by 1/10 of 1%. It appears that decision makers (i.e. judges, juries, parole boards) may well be “falling prey to the tendency to assume that if an item of evidence… fits their intuitive stereotype or causal theory of those associated with a specific criminal behavior, the evidence is usefully probative of guilt” (p.150). The problem becomes more acute with multiple examples of high rate evidence. It is suggested that estimating the actual value of this evidence – as done in the example above – may reduce the prejudicial value of the evidence.

Reference: Davis, Deborah and William C. Follette (2002). Rethinking the Probative Value of Evidence: Base Rates, Intuitive Profiling, and the “Postdiction” of Behavior. Law and Human Behavior, 26, 133-158.


The two currently ‘hot’ models of modern policing – community policing and broken-windows policing – have sometimes been described as complementary in nature. There are serious concerns about seeing them in this way. 

Background. Two paradigms have developed in the past several decades on how the police can reduce crime. “Broken windows policing” (a.k.a., order maintenance policing or quality of life policing) is based on the assertion that low level disorder breeds higher level disorder. It justifies a focus on vagrancy, beggars or prostitutes because the reduction of these problems is seen as a mechanism for preventing robbery or murder. Community policing describes the solution to crime as being a partnership between police and the community in defining both the problem and the solutions. “These philosophies for bringing about change in policing practices are frequently conflated, most notably in the popular press” (p.446). 

This paper suggests that broken windows has “emerged as the dominant applied approach” (p.446) in many areas for three reasons. First, it fits police culture. Second, it is consistent with cultural understanding of crime and deviance. Finally, it is compatible with the current political culture. Both models of policing “developed as an explicit reaction against the previously hegemonic model of policing - the professional model” - with its “dual emphases: greater aloofness from the citizenry and greater emphasis on technology” (p.447). 

In contrast, community policing and broken-windows policing each emphasize informal social control and are “putatively aimed at ensuring that communities police themselves” (p.449). “The critical question then becomes just how the community will be empowered” (p.449). In community policing, the line between the police and the citizen is blurred since they are to be seen as partners. In the same way, the distinction between law abiders and law breakers becomes less clear. Broken-windows policing, by contrast, reinforces the lines between police and citizens as well as between good and bad citizens. Hence, police become responsible for reducing crime through the use of their coercive powers. “The power of the police, then, is not meaningfully threatened by the broken windows approach” (p.450). The “broken windows model endorses the police-centred, aggressive vision of crime reduction that was central to the professional model, and that is potentially challenged by community policing” (p.451). 

Broken windows policing is sometimes seen as being responsible for crime reduction in the U.S. This assertion “receives little support when evaluated closely” (p.451), notwithstanding assertions about New York City. Furthermore, there is even question surrounding the purported relationship between crime and disorder (See Highlights Vol.3, No. 3, Item 1). 

Part of the popularity of broken windows is that it fits nicely with police culture. Masculinity, morality and the use of force are part of police culture and are central to this theory of policing. Broken windows encourages aggressive patrolling, frequent stops and arrests, and the view that crime is the result of an “external invasion” (p.456). Consequently, these policing practices become justifiable because they are seen as stopping crime. Broken windows policing also fits the dominant political response to crime: tough, quick, and simple. 

Conclusion. While actively and openly practising broken windows policing, many police departments “profess to be doing some form of ‘community policing’, regardless of whether they are doing anything in practice in terms of improving community relations” (p.459). By conflating the two different concepts, it is argued that “the opportunity for oversight [of police affairs by the citizenry] that community policing presented” (p.459) is diminished.

Reference: Herbert, Steve (2001). Policing the Contemporary City: Fixing Broken Windows or Shoring Up Neo-Liberalism. Theoretical Criminology, 5, 445-466.


How good are the fancy “geographic profiling systems” such as “Dragnet” at locating the homes of serial offenders? Not bad, actually. In fact, they are just as good as 21 students from Liverpool whose total training consisted of only two sentences of instruction on how to locate the homes of serial offenders.

Background. Computerized geographic profiling systems are apparently attractive to law enforcement agencies. In an attempt to professionalize their use, it has been suggested that “in order for individuals to be qualified to use [one] to make geographic predictions, they should have three years experience investigating interpersonal crimes and a superior level of investigative skill” (p.110). Computer programs are not magic, of course. Someone has to decide how to use the information about the location of crimes.  There are two correlated hypotheses about the choice of location of criminal activity by those who repeatedly offend:

  • “Distance-decay” – Offenders generally “commit offences close to home” (p.112).
  •  “Circle hypothesis” – Offenders’ homes tend to be located “within a circle with its diameter defined by the distance between [their] two furthermost crimes” (p.112).

This study used spatial displays of the locations of 10 sets of serial murders. It pitted Dragnet - a computerized geographic profiling system - against 42 University of Liverpool students. Without training, Dragnet won: it was more accurate, on average, than the average of the students. Subsequently, half of the students were given training. This instruction consisted simply of telling them the two propositions presented above (e.g., “The majority of offenders commit offences close to home.”). All students were then given an opportunity to repeat the task of locating the homes of the ten offenders. [Dragnet was assumed not to need a second try.] With ‘training’, the 21 students did just as well as Dragnet. Obviously, there was some variability across students, but with ‘training’ the students’ estimates generally became less variable and more accurate. 
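
As an illustration of how little machinery the two “training” sentences require, here is a minimal sketch of two crude location heuristics. It is not the Dragnet system, and the crime coordinates are invented; it simply shows one way the distance-decay and circle-hypothesis ideas can be turned into point estimates of a home location.

```python
# Illustrative sketch (not the Dragnet algorithm): two crude heuristics that
# operationalize the propositions above. Crime sites are (x, y) coordinates;
# the coordinates used here are invented for illustration.
from itertools import combinations
import math

def centroid(points):
    """Distance-decay idea: offenders tend to live near their crimes,
    so guess the mean of the crime locations."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def circle_centre(points):
    """Circle hypothesis: the home tends to lie within the circle whose diameter
    joins the two furthest-apart crimes, so guess the centre of that circle."""
    a, b = max(combinations(points, 2), key=lambda pair: math.dist(*pair))
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

crimes = [(1.0, 2.0), (2.5, 1.5), (3.0, 4.0), (0.5, 3.5)]  # hypothetical crime sites
print(centroid(crimes))       # (1.75, 2.75)
print(circle_centre(crimes))  # midpoint of the two furthest-apart sites
```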

Conclusion. Giving a set of otherwise untrained undergraduates instruction consisting of only two sentences on how to locate the home of a serial offender made them as accurate as a computerized geographic profiling system. This finding suggests that the sophistication of the knowledge we currently have for locating the home of an offender is limited. Further, this information is easily communicated to all interested parties, who can subsequently use it as effectively as a “sophisticated” system. The results also showed that when the students’ estimates were far off, so were the computer system’s. The implication is that the cases which departed from the simple rules presented in the two propositions were not being discovered by either the students or the geographic profiling system.

Reference: Snook, Brent, David Canter, and Craig Bennell (2002). Predicting the Home Location of Serial Offenders: A Preliminary Comparison of the Accuracy of Human Judges with a Geographic Profiling System. Behavioral Sciences and the Law, 20, 109-118.


Proportionality is a simple concept but deciding which factors determine the “seriousness” of an offence will require some difficult choices. 

Background.  In sentencing adults, proportionality is said to be the “fundamental principle” which determines a sentence. In the case of youth being sentenced under the Youth Criminal Justice Act, judges are required to give a sentence that is “proportional to the seriousness of the offence and the youth’s responsibility for the offence.” These statements appear straightforward. It is likely that almost everyone would agree that proportionality should be important in sentencing. However, “[t]here is consensus only in the abstract [about what proportionality means]. The principle is so nebulous that it would be misleading to assert that it provides a meaningful guide to sentencers” (p.143).  

This paper examines the arguments that have been put forth for deciding which factors should or should not be considered relevant in determining the seriousness of an offence. More important than the particular conclusions of the author are the issues which are raised. Indeed, the development of some general principles may ultimately be more useful than a non-exhaustive list of aggravating factors.

The author begins with a discussion of the work of Von Hirsch and Jareborg. These scholars have suggested that there are two dimensions to proportionality: harm and culpability. The latter is the easier of the two. Indeed, familiar criminal law doctrines such as intention, recklessness, negligence and excuses (such as provocation) fit nicely into culpability arguments at sentencing. For the former, they suggest a “living standard” criterion “where the gravity of criminal harm is determined by the importance that the relevant interests have for a person’s standard of life” (p.147). Von Hirsch and Jareborg focus on the means or capabilities for achieving a certain quality of life, rather than actual life quality. They suggest that there are four living standard levels: subsistence, minimum well-being, adequate well-being and enhanced well-being. Crimes which violate the most basic level of well-being are most serious since they threaten the ability of a person to live. Further, various interests exist that can be violated: physical integrity, material support and amenity, freedom from humiliation or degrading treatment, and privacy and autonomy. Von Hirsch and Jareborg present various ways in which particular crimes can be “scaled” on these dimensions. “Discounts” can be applied for crimes which create only risks of, or threats to, particular interests. One criticism is that the ranking of the various “standard levels” is somewhat arbitrary, justified in part only because it “appears to fit the way one ordinarily judges harms” (p.149).  
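
A toy encoding may make the structure of the living-standard approach easier to see. The sketch below is purely illustrative: the ordering of the four levels and the idea of a “discount” for risk-creating offences follow the account above, but the numeric weights and the discount factor are invented and carry no authority.

```python
# Toy encoding of the living-standard framework described above. The ordering of
# the four levels and the notion of a risk "discount" follow the text; the
# numeric weights and the 0.5 discount factor are invented for illustration only.
from enum import IntEnum

class LivingStandard(IntEnum):
    ENHANCED_WELL_BEING = 1   # least serious intrusion
    ADEQUATE_WELL_BEING = 2
    MINIMUM_WELL_BEING = 3
    SUBSISTENCE = 4           # most serious: threatens the ability to live

def harm_score(level: LivingStandard, risk_only: bool = False,
               risk_discount: float = 0.5) -> float:
    """Crude harm score: deeper intrusions on the living standard score higher;
    offences that merely create a risk of the harm are discounted."""
    score = float(level)
    return score * risk_discount if risk_only else score

print(harm_score(LivingStandard.SUBSISTENCE))                  # 4.0 - e.g., a life-threatening attack
print(harm_score(LivingStandard.SUBSISTENCE, risk_only=True))  # 2.0 - conduct that only risks such harm
```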

Another approach, proposed by the author of this study, is to return to one of the utilitarian justifications for criminal law: “some interests are important and worthy of protection because they are integral to the attainment of happiness” (p.149). Consequently, we would gauge the seriousness of an offence by the degree to which it interfered with happiness. In addition, the argument is made that the law of criminal defences provides some guidance: necessity, duress, provocation, mistake and so on all indicate which factors we do (and do not) consider relevant in determining the moral culpability of an action. For example, reckless and negligent offences are typically not as serious as deliberate ones.  

Conclusion. Clearly, it is important to decide the role that various factors (e.g., intention, negligence, recklessness) should play in determining the seriousness of an offence. However, these elements could be incorporated into a “reduction in happiness” standard of offence seriousness by noting that intentional acts typically result in more harm than reckless or negligent ones. A useful “proportional” sentencing system will need to develop a coherent structure for the difficult task of assessing offence seriousness. 

Reference: Bagaric, Mirko (2000). Proportionality in Sentencing: Its Justification, Meaning, and Role. Current Issues in Criminal Justice, 12, 143-163.

