Criminological Highlights Vol. 10, No. 6 – December 2009

This issue of Criminological Highlights addresses the following questions:

  1. Does placing a youth in custody for a long period of time reduce recidivism?
  2. Why might it be better to place prisoners in lower security prisons?
  3. Does it make sense to allow youths to make complex decisions about their own medical treatment but hold them less responsible than adults for their offences?
  4. How do best friends affect each other’s level of offending?
  5. What are the problems of using risk assessment as the basis of decisions in the criminal justice system?
  6. Does it matter who evaluates crime prevention programs?
  7. What is the impact on crime of an influx of immigrants?
  8. Can screening jurors for racial prejudice eliminate race-based decisions?

Serious juvenile offenders who are ordered to serve time in juvenile institutions are just as likely to reoffend as are comparable youths who remain in the community.  Furthermore, longer stays in juvenile institutions do not reduce subsequent offending. 

Although many political leaders suggest that communities would be safer if serious juvenile offenders were placed in institutions for long periods of time, they typically make such suggestions in the absence of empirical support.  Most systematic studies of the issue have produced much less optimistic findings. If long stays are not effective, it follows that crime prevention policies based on removing youths from the community should be revisited.  This paper examines the effect of removing serious juvenile offenders from the community, using a sample of 921 youths in two locations in the United States.

About half of this sample of youths was placed on probation; the other half was sent to an institutional placement.  The unusual strength of this study was that 66 separate variables were used to control, statistically, for the differences between those youths placed in institutions and those placed on probation. These same variables were used to control for differences between youths who received institutional placements of different lengths.  Not surprisingly, many of these variables showed differences between those placed in institutions and those placed on probation, and between those who received long and short stays, underlining the importance of controlling for these differences.

Two measures of subsequent offending were used: the re-arrest rate during a follow-up period of 48 months and the self-reported offending rate – the number of different types of offences (out of 22 serious antisocial and illegal behaviours) that the youth engaged in during the 4-year follow-up, corrected for the amount of time that the youth was actually in the community.   These two measures were, not surprisingly, moderately (r = .47), but by no means perfectly, correlated. 
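
The exposure correction is easy to picture with a small worked example. The sketch below is a minimal illustration of the idea only; the function name, its inputs, and the per-year form of the correction are assumptions, not the study’s actual computation.

```python
# Minimal sketch of an exposure-corrected offending rate; the function
# name, inputs, and per-year correction are illustrative assumptions.
def offending_rate(offence_types_reported: int, months_in_community: float) -> float:
    """Variety of offending (0-22 offence types) per year actually spent at liberty."""
    years_free = months_in_community / 12.0
    return offence_types_reported / years_free if years_free > 0 else float("nan")

# Two youths reporting the same variety of offending look quite different
# once time actually spent in the community is taken into account.
print(offending_rate(6, 48))  # 1.5 offence types per year at liberty
print(offending_rate(6, 12))  # 6.0 offence types per year at liberty
```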

Given the background differences between the youths placed in institutions and those who remained in the community, it is not surprising that, absent any controls, the two groups differed in subsequent offending: those placed in the community were about half as likely to be rearrested as those placed in institutions.  The more appropriate test of the impact of institutional placement, however, is one that takes into account the differences between the groups.  After controlling for the background differences, there were no significant differences between the two groups in re-arrest rates.  Said differently, “the results show no marginal gain from placement in terms of averting future offending” (p. 722).  Similar effects were found for self-report offending.

When looking at the effects of the length of institutional placement (taking into account the various control factors), there was, once again, “no marginal benefit, at least in terms of reducing the future rate of offending [re-arrest and self-report offending], for retaining an individual in institutional placement longer” (p. 723). 

Conclusion. This study of relatively serious young offenders suggests that a strategy of placing youths in custodial settings – and holding them there for long periods of time – is not likely to reduce future offending.   The latter finding – that the effect is unrelated to the “dose” of the “treatment” – suggests that, in this case, more is not likely to be better. 

Reference: Loughran, Thomas A., Edward P. Mulvey, Carol A. Schubert, Jeffrey Fagan, Alex R. Piquero, and Sandra H. Losoya (2009). Estimating a Dose-Response Relationship Between Length of Stay and Future Recidivism in Serious Juvenile Offenders. Criminology, 47 (3), 699-740.

Placing prisoners in higher levels of security than necessary leads to higher rates of recidivism upon release.

There is some published evidence that assignment to a high security institution increases the likelihood of misbehaviour and of recidivism compared to inmates with the same classification scores who are assigned to lower security institutions.  However, these studies typically suffer from the possibility that unmeasured factors are responsible both for the placement in higher security levels and for higher rates of misbehaviour in prison or recidivism after release.  In contrast, this paper reports recidivism data for a set of prisoners who, in effect, were assigned to either a high or a low security prison on a truly random basis, eliminating the possibility that differences in recidivism were the result of pre-existing conditions.

For a 6-month period in the late 1990s, California experimented with a new method of classifying inmates.  During this period, all prisoners were classified using two different instruments – the one that had been in use for some time and a new one. The 561 prisoners for whom the two classification systems produced different results were randomly assigned to either a low or a medium-high security prison.

These two groups of prisoners “exhibited equivalent levels of total and serious misconduct during their institutional confinement” (p. 153).   After release, however, differences emerged. The re-committal of prisoners to the California prison system between their release date and September 2006 constituted the main outcome variable.  The average risk period was 5.9 years.  Notwithstanding the random assignment, the higher security prisoners did tend to spend less time in prison.  Hence ‘time at risk of recidivism’ needed to be controlled for.  Using a ‘survival’ analysis (whether or not the inmate was still in the community and, therefore, had not been returned to prison), it was clear that the low security prisoners were likely to remain longer in the community without being returned to prison. “Prisoners who had been assigned to a [medium-high] security prison began failing at a higher rate upon release and continued to do so until about 1000 days after release from prison” (p. 151).  
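
To make the method concrete, the sketch below shows a conventional Kaplan-Meier comparison of two release groups using the Python lifelines library. The data file, column names, and group labels are illustrative assumptions; this is not the authors’ code or data.

```python
# Sketch of the kind of survival comparison described above; all file and
# column names are assumptions made for illustration.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

df = pd.read_csv("releases.csv")  # hypothetical: one row per released prisoner
# days_free : days from release until return to prison (or end of follow-up)
# returned  : 1 if recommitted before the September 2006 cut-off, 0 if censored
# security  : 'low' or 'medium-high' (the randomly assigned prison level)

ax = plt.subplot(111)
for level, group in df.groupby("security"):
    kmf = KaplanMeierFitter()
    kmf.fit(group["days_free"], event_observed=group["returned"], label=level)
    kmf.plot_survival_function(ax=ax)  # proportion still in the community over time

ax.set_xlabel("Days since release")
ax.set_ylabel("Proportion not yet returned to prison")
plt.show()
```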

Obviously, one limitation of this study is that it could only be carried out with those prisoners who received different classification results on the ‘old’ and ‘new’ classification instruments.  We do not know, therefore, if the results would generalize to all prisoners.  Unfortunately, it is impossible to know exactly why the prisoners sent to the medium-high security prisons were more likely to reoffend than were the prisoners who spent their prison time in minimum security.  Whether the risk of recidivism was influenced by peer and/or environmental factors is, therefore, not answered by this study. 

Conclusion. Given that prisoners assigned to high levels of security in prison are more likely to re-offend when released, it would appear that “By separating inmates into homogeneous risk pools, prison administrators are inadvertently increasing the likelihood that [certain] inmates will be recommitted to prisons” (p. 153).  This criminogenic effect of high security, however, does not manifest itself until after release.  Though the mechanism for the effect may not be clear, what is clear is that there may be important public safety consequences of unnecessarily restrictive prisoner classification. 

Reference: Gaes, Gerald G. and Scott D. Camp (2009).  Unintended consequences: Experimental Evidence for the Criminogenic Effect of Prison Security Level on Post-Release Recidivism. Journal of Experimental Criminology, 5, 139-162. 

The fact that youths are capable of knowing ‘right from wrong’ and that they are capable of making informed and intelligent decisions about medical treatment does not justify treating youths as adults.  

Many jurisdictions have created different minimum age limits for different activities.  For example, youths can be sentenced to life in prison at age 14 in Canada, but cannot legally purchase cigarettes in some provinces until they are 19. The American Psychological Association argued before the U.S. Supreme Court that adolescent girls should have the right to make their own decisions on terminating an unwanted pregnancy, but also argued before the same body that youths should not be subject to the death penalty because of their immaturity.  These two positions were seen as contradictory by one of the US Supreme Court justices who voted in favour of the position that youths should be eligible for the death penalty.  This paper presents evidence that the positions are not contradictory and, in fact, reflect different forms of maturity. 

The main reason that these positions are not contradictory is rather simple: they are referring to different types of skills. It can be shown that youths are less mature than adults on dimensions that are relevant for determining culpability for criminal acts but are able to make difficult decisions involving personal and social values. 

Making basic medical decisions relates to “abilities that permit logical reasoning about moral, social, and interpersonal matters…” (p.586). Offending, however, is different. Youths are not the same as adults when such “capacities as impulse control and resistance to peer influence” (p. 586) are important in a decision.   The evidence suggests that most youths make decisions about terminating pregnancies in a manner that is carefully considered and not rushed. Typically these decisions are made with adult advice (though not necessarily advice from a parent). Crimes, however, are often a result of impulsive and unplanned decisions.  

A large multi-site study of 935 people age 10-30 examined the relationship of age to cognitive and psychosocial abilities.  Various standard measures of cognitive skills were used. In assessing psychosocial maturity, the investigators employed measures of whether respondents saw potentially dangerous or harmful activities as risky.  They also assessed sensation seeking, impulsivity, ability to resist peer influence, and future orientation.  

Between age 10 and age 15, psychosocial maturity did not increase.  Only at about age 16 did youths begin to achieve maturity on this dimension. Interestingly, increases in psychosocial maturity continued up to age 30.  A general measure of cognitive capacity, on the other hand, increased from age 10 to age 16, but levelled off thereafter.  Studies of competence to stand trial show the same general relationship with age as other cognitive abilities. Thus one can be reasonably confident in concluding that “adolescents reach adult levels in cognitive maturity several years before they reach adult levels of psychosocial maturity” (p. 592).

Conclusion.  Given that cognitive abilities appear to be ‘adult-like’ relatively early in life (i.e., by the time a youth reaches age 16), whereas matters such as sensation seeking and the ability to resist the influence of peers only begin to become more adult-like at about age 16, it makes sense to think of these as distinct abilities.  In deciding, then, whether youths in mid-to-late adolescence should be treated as adults, one has to determine which dimension is relevant to the behaviour in question.

Reference: Steinberg, Laurence, Elizabeth Cauffman, Jennifer Wollard, Sandra Graham, and Marie Banich (2009).  Are Adolescents Less Mature Than Adults? Minors’ Access to Abortion, the Juvenile Death Penalty, and the Alleged APA [American Psychological Association] “Flip-Flop”.  American Psychologist, 64 (7), 583-594. 

When a youth’s best friend is more delinquent than he or she is, the youth will become more delinquent over time. But if the youth’s best friend is less delinquent, the youth will become less delinquent over time. 

There is an extensive research literature in criminology suggesting that youths whose friends are delinquent are more likely to be delinquent themselves than are youths whose friends are not.   The implication is simple: if a youth is not involved in delinquency, then the youth should be kept away from delinquent youths.  But is the influence only in one direction?  This paper examines the hypothesis that the influence works, in fact, in both directions.

In a large multi-site study, youths in grades 7 to 12 in 16 schools were asked to indicate which of 13 deviant acts they engaged in. These included offences such as damage to property, thefts, selling drugs, as well as other forms of misbehaviour such as running away from home, lying to parents/guardians, and taking part in a fight involving a group of youths. They were also asked to identify their best same-sex friend.  About a year later, they were asked, again, to identify their best same-sex friend. They also completed, again, the same measure of deviance.  Because all youths in 16 schools were asked to fill out the questionnaires, the researchers had a high probability of being able to match youths with their best friends and to compare their levels of offending.  
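
The matching step described above amounts to a self-join of the survey data. The sketch below illustrates the general approach with pandas; the file names, columns, and scoring are assumptions, not the study’s actual procedure.

```python
# Illustrative sketch of the friend-matching step; file names, columns,
# and the deviance scoring are assumptions, not the study's actual code.
import pandas as pd

wave1 = pd.read_csv("wave1.csv")  # hypothetical: id, best_friend_id, deviance (0-13 acts)
wave2 = pd.read_csv("wave2.csv")  # the same respondents roughly a year later

# Self-join: attach each youth's named best friend's wave-1 deviance score.
friends = wave1[["id", "deviance"]].rename(
    columns={"id": "best_friend_id", "deviance": "friend_deviance"}
)
matched = wave1.merge(friends, on="best_friend_id", how="inner")

# Change in the youth's own deviance between the two waves.
matched = matched.merge(
    wave2[["id", "deviance"]].rename(columns={"deviance": "deviance_w2"}), on="id"
)
matched["change"] = matched["deviance_w2"] - matched["deviance"]

# Did youths move toward their friend's level of deviance?
matched["friend_relative"] = matched["friend_deviance"].gt(matched["deviance"]).map(
    {True: "friend more delinquent", False: "friend equal or less delinquent"}
)
print(matched.groupby("friend_relative")["change"].mean())
```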
 
Between the two measurement periods, youths’ delinquency rates tended to move in the direction of their best friend’s.   About a quarter of the youths did not change their delinquency rates at all during the year.  However, among those whose best friend’s delinquency was lower than their own, 28% became more delinquent and 66% became less delinquent in the interval between the two waves of data collection.  Among those whose best friend had a higher delinquency score than they did, roughly equal proportions increased, stayed the same, or decreased in their delinquency scores.

Youths whose best friends’ delinquency rates increased during the interval between the two data collections (either because the friend’s level of delinquency changed, or because the youth changed best friends) tended, themselves,  to increase their own levels of delinquency, and those whose best friends’ delinquency rates decreased tended to decrease their own level of delinquency.   Whether or not the youth changed best friends in the intervening period of time did not appear to make any difference to their changes in rates of delinquencies. 

Stability of friendship appeared to be unrelated to the similarity of the delinquency levels of the youth and his or her best friend.  In addition, there was only a weak relationship between the youth’s delinquency level and the best friend’s level. 

Conclusion.  It appears from these data that the delinquency levels of best friends are likely to become more similar over time.  At the same time, youths whose delinquency rates differ from their best friend’s are just as likely to remain friends as are youths whose delinquency levels are similar.   However, just as exposure to delinquent peers is an important factor in predicting delinquency, these data remind us that having non-delinquent peers reduces subsequent delinquency.

Reference: McGloin, Jean Marie (2009).  Delinquency Balance: Revisiting Peer Influence. Criminology, 47 (2), 439-477.

Using “risk” as the basis of criminal justice decisions can be risky: Such decisions may turn out to be less accurate than anticipated and may undermine other important principles. 

Risk assessments have been used in criminal justice decision-making for decades.  Judges and other criminal justice decision-makers sometimes think that they can predict – using their own intuition or the ostensibly sophisticated prediction instruments developed by others – whether an individual will re-offend.  Parole authorities are often, in legislation, required to take into account the likelihood that a prisoner will re-offend.  In a similar way, “actuarial risk assessment is now promoted as best practice in child welfare…” (p. 3). 

Risk factors are now commonly divided into two types: static (largely factors relating to an offender’s past) and dynamic (factors subject to change).  Furthermore, in part because of the focus on dynamic factors in predictions, “criminogenic needs” have also become important. The growth of ‘evidence-based practice’ in predictions has encouraged reliance on a simple measure of effectiveness: does a measure predict future offending?  If the answer is “yes”, the investigation of the validity of an instrument often ceases there.  In particular, the validity of the measure is seldom described in terms of the proportion of false positives and false negatives that result from using the scale.

Some scales include components that do not, on their own, predict reoffending. The difficulty is that if individual components of a measure do not predict future offending – as is the case with some components of the LSI-R (Level of Service Inventory – Revised) – one runs the very real risk of classifying an individual on the basis of factors that have no predictive value even if the overall measure does predict. The result could be that a person’s liberty is restricted as a result of a characteristic that has no relationship to future offending. Furthermore, when factors that are not demonstrably related to recidivism are included in overall risk measures, the measures will inevitably be less effective in classifying offenders than they could be.
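
This point is easy to demonstrate with a small simulation. The snippet below uses entirely invented data to show how a composite scale can predict an outcome even when one of its components is pure noise – exactly the situation in which someone could be classified on the basis of a factor with no predictive value.

```python
# Tiny simulation of the point above: a composite scale can predict the
# outcome even when one component is pure noise. All data are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
predictive_item = rng.normal(size=n)               # component related to reoffending
noise_item = rng.normal(size=n)                    # component with no predictive value
reoffending = predictive_item + rng.normal(size=n)

composite = predictive_item + noise_item           # the scale sums both components
print(np.corrcoef(composite, reoffending)[0, 1])   # clearly positive (about .50)
print(np.corrcoef(noise_item, reoffending)[0, 1])  # essentially zero
```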

Even when the best possible measures are used, there is substantial error.  In one study of the LSI-R, 42% of the Pennsylvania parolees classified in the highest risk category did not reoffend.  An ‘improved’ version of this scale reduced the false positive rate to 31%.  However, only 25% of those who did subsequently reoffend had been identified as high risk.  Similar findings (with high false positive and false negative rates) are easy to find in other studies.  Though the relationship between ‘risk’ measures and recidivism is almost always positive and ‘statistically significant’, there are inevitably high proportions of people who score as ‘high risk’ but do not reoffend, and it is rare that a high proportion of recidivists are identified correctly by these scales.   In many cases, the problem is that there are large numbers of ‘moderate risk’ offenders whose recidivism is, in effect, unpredictable.
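
To make the two error rates concrete, here is a small worked example. Only the percentages come from the study quoted above; the absolute counts are invented so that the arithmetic is consistent with those percentages.

```python
# Worked illustration of the two error rates quoted above; the absolute
# counts are invented, only the percentages come from the text.
flagged_high_risk = 1000               # hypothetical parolees scored 'high risk'
false_positives = 310                  # 31% of them did not reoffend ('improved' scale)
true_positives = flagged_high_risk - false_positives   # 690 flagged who did reoffend
reoffenders_total = 2760               # chosen so that 690 / 2760 = 25%

share_flagged_not_reoffending = false_positives / flagged_high_risk   # 0.31
share_reoffenders_flagged = true_positives / reoffenders_total        # 0.25

print(f"{share_flagged_not_reoffending:.0%} of 'high risk' parolees did not reoffend")
print(f"only {share_reoffenders_flagged:.0%} of actual reoffenders were flagged 'high risk'")
```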

Scale constructors in this area, remarkably, often focus on the internal consistency of the measures.  In risk assessment, however, “it is best when all risk items are totally independent of each other but each has a relatively strong relationship to the outcome measure utilized” (p. 6).  These conditions rarely occur in risk scales. 

Conclusion.  It is inevitable that a high proportion of those whom these scales predict will re-offend do not, in fact, subsequently offend.  Conversely, many of those who do re-offend were not predicted to do so. Hence it is important to question whether the criminal justice system should base important decisions on perceived risk. If prediction of human behaviour is inherently flawed, perhaps we should revert to other principles – especially in the allocation of punishment. Instead of trying to use the criminal justice system to predict future offending, punishment could be allocated largely on the basis of what an offender has done, rather than what someone thinks he or she might do in the future.
 
Reference: Baird, Christopher (2009).  A Question of Evidence: A Critique of Risk Assessment Models Used in the Justice System.  National Council on Crime and Delinquency. 

The importance of independent evaluations of crime prevention programs is evident from the fact that programs evaluated by their developers tend to show more positive effects than evaluations carried out by independent evaluators.

In recent years, increasing evidence has emerged that crime prevention studies conducted by ‘developers-as-evaluators’ are much more likely to show positive effects than similar studies conducted by independent evaluators.  There are two possible reasons for this.  First, it is possible that “the implementation quality is better in studies in which the program developer is responsible for the implementation” (p. 164).  Even if this were true, there would still be a serious problem: the findings would suggest that positive impacts could be expected only where a highly motivated implementation team was able to keep up its enthusiasm indefinitely.  The second possible explanation is that results obtained by the developer of a program may be systematically biased (e.g., by focusing on positive effects rather than negative effects).

Concern about the latter explanation is of course not limited to crime prevention studies. Similar concerns have been expressed in other areas (e.g., drug effectiveness studies).  But in criminology, the problems are serious, given that evaluations are often carried out by the developer of a program.  These problems need not involve blatant dishonesty.  Instead they can involve such matters as “selective reporting on positive results, ignoring problems associated with differential attrition [from the treatment programs], post-hoc definition of the analyzed dataset, inconsistent ad-hoc definitions of the dependent variables and the unwarranted use of one-tailed significance tests” – a relaxed standard for interpreting whether or not positive effects were found (p. 167).

Some of the programs evaluated by their developers have been declared ‘model programs.’ In the case of one parent training program, 43 studies that looked at child problem behaviour as an outcome variable showed quite strong effects. However, an independent evaluation of that program, as implemented in 55 schools by the commercial group that developed it, found no positive impacts.  Similarly, a highly publicized anti-bullying program (the Olweus Bullying Prevention Program) that is distributed in the U.S. reports on its website findings from four studies in which the evaluation was conducted by the program developer.  These four studies show positive effects.  A fifth study, carried out in 12 schools in Philadelphia and evaluated by an independent evaluator, shows no positive effects.

These findings – positive effects by program developers and no positive effects when evaluated independently – need not involve data falsification. Instead, evaluators with an interest in the outcome may “pay more attention to evidence that supports the conclusion that they would like to reach [and may be] inclined to disregard information that contradicts their views” (p. 172).  In addition, of course, various inadequate design characteristics – such as inadequate comparison groups, post-hoc exclusion of outliers or other cases that tend to disconfirm the hypothesis, selective sub-group analysis – may account for the differences. 

Conclusion. Results that do not generalize to circumstances in which a program is evaluated by an independent research team clearly can have harmful consequences.  They tend to drive scarce resources into programs that may be ineffective. In addition, to the extent that inadequate evaluations lead to contradictory results, policy makers and the public could understandably become sceptical of any research.  The lesson is clear: those wanting to implement crime prevention programs must look carefully not only at the quality of the research supporting the effectiveness of a program but also at the relationship of the evaluator to the program itself.

Reference: Eisner, Manuel (2009). No effects in independent prevention trials: Can we reject the cynical view?  Journal of Experimental Criminology, 5, 163-183.

Cities in the U.S. that had the highest increases in the number of new immigrants during the 1990s showed the largest decreases in violent crime during the same period. 

Immigrants in many countries are often blamed for apparently high rates of crime. Most research, on the other hand, suggests that first-generation immigrants typically tend to have lower crime rates than the average of the communities in which they settle (e.g., Criminological Highlights V8N6#5).  In recent years, immigrants have tended to settle in a wide range of locations in the U.S., unlike earlier periods when they tended to settle in ‘gateway’ cities. Hence, immigration could well be having a widespread impact on crime in the United States.

This paper looks at changes in crime rates in 103 metropolitan areas in the U.S. during the period 1994-2004. It models crime rates as a function of changes in the residential concentration of immigrants and Latinos, holding constant (statistically) a large number of other factors known to be related to crime rates (racial composition, age structure, educational attainment, family structure, etc.).  The analytic challenge was to use a technique that took into account the fact that violent crime generally decreased during this period while the percentage of immigrants in these metropolitan areas increased.  For that reason, the study focused on relative change in violent crime rates rather than on absolute violent crime rates.
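
The general shape of such an analysis can be sketched as a pooled regression with area and year fixed effects. The snippet below is only an illustration of the approach; the variable names, covariates, and exact specification are assumptions, not the authors’ model.

```python
# Sketch of a pooled cross-sectional time-series model in the spirit of the
# analysis described above; names and specification are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("metro_panel.csv")  # hypothetical: one row per metro area per year

model = smf.ols(
    "violent_crime_rate ~ immigrant_pct + latino_pct"
    " + pct_young_males + pct_poverty + pct_single_parent"
    " + C(metro_area) + C(year)",  # fixed effects absorb stable area and period differences
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["metro_area"]})

# The coefficient of interest: association of immigrant concentration with violent crime.
print(model.params["immigrant_pct"])
```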

The results were consistent across measures of the overall violent crime rate, the robbery rate, and the aggravated assault rate: increases in the concentration of immigrants were associated with decreases in these indicators of violent crime. There was, however, no consistent impact of immigration on rape or homicide rates.  It could be argued that the increase in immigration was one of the explanations for the general reduction in crime in the U.S. during this period.  However, “the overall role of immigration for the crime decline is modest, accounting for just over 6 percent of the observed crime drop” (p. 907).  At the same time, these effects should not be ignored: for the cities with the highest changes in immigrant populations, the effect translates into 40.5 fewer violent crimes per hundred thousand people in the general population.

Conclusion.  It would appear that the increase in the concentration of immigrants in large U.S. cities and the decline in crime rate are likely to be causally linked.   This effect is consistent with studies elsewhere, but is at odds with popular stereotypes of immigrants being prone to committing crime.   Why this effect occurs, however, is not answered by this study.

Reference: Stowell, Jacob I., Steven R. Messner, Kelly F. McGeever, and Lawrence E. Raffalovich (2009).  Immigration and the Recent Violent Crime Drop in the United States: A Pooled Cross-Sectional Time-Series Analysis of Metropolitan Areas.  Criminology, 47 (3), 889-928.

Asking potential jurors to reflect on how their ability to judge evidence in a case might be influenced by the race of a defendant is a more effective way of dealing with potential racial prejudice of jurors than simply asking them whether they would be affected by the race of the defendant. 

Concern about the possibility that a defendant’s race might influence decision-making by juries has led, in Canada, to jurors being asked whether their ability to judge the evidence would be influenced by the defendant’s race.  The problem with this approach is that it assumes that potential jurors are aware of how they would respond to particular situations and will screen themselves out of the jury.
 
Another approach to the potential impact of a defendant’s race is to encourage jurors to consider the possibility that racial prejudice might bias their judgement.  The result might be that this instruction would “orient them toward the process of correction rather than a simple denial of prejudice” (p. 322).   This study examines the effectiveness of the Canadian approach to filtering out prejudiced jurors – simply asking them if they would be affected by the defendant’s race. It compares this approach to one in which potential jurors are asked to consider how their judgments might be affected by the fact that the defendant was black. 

Non-Black Canadian university students were asked to respond to one of two trial scenarios: a drug trafficking case and an embezzlement case (the former chosen because it was assumed to be a stereotypically Black crime).  For half of the respondents, the defendant was described as being Black; for the others, he was described as being White.  One group simply read the case without any initial questioning concerning possible bias (the ‘no challenge’ group). Other respondents who received the cases involving the Black defendant were asked to answer ‘yes’ or ‘no’ to a version of the standard Canadian screening question – whether their “ability to judge the case without bias, prejudice, or sympathy would be affected by the fact that the person charged is Black” (p. 323). Another group was asked a more reflective question: “How might your ability to judge the evidence in the case be affected by the fact that the defendant is Black?” This group was then asked whether their ability to judge would be affected by the defendant’s race.

Similar proportions of people in the two cases (15%-18%) indicated that the defendant’s race would affect their judgment. However, those who acknowledged this possibility were no more likely to judge the Black defendant guilty than were those who did not admit prejudice.

The main dependent variable was a scale combining the verdict and the respondent’s confidence in the verdict (i.e., running from “very confident of guilt” to “very confident that the defendant is not guilty”).  There was clear evidence of an effect of the race of the defendant.  In the conditions in which there was no questioning about possible bias (the ‘no challenge’ group), those who received cases in which the defendant was described as Black were more likely to indicate that the defendant was guilty than were those who read cases in which the accused was described as White.  However, respondents who were asked to reflect on how the Black defendant’s race might affect their judgements were less likely than the ‘no challenge’ group to see the Black defendant as guilty, and their ratings of guilt were similar to those given to the White defendant.  The condition in which respondents were simply asked whether they might be biased did not differ from the ‘no challenge’ condition with the Black defendant.  In other words, asking respondents to reflect on how the defendant’s race might affect their judgement changed their decisions such that the race of the defendant no longer had an effect.

Conclusion.  The standard Canadian question asking potential jurors whether they were prejudiced against the defendant did not reduce the impact of the defendant’s race.  These effects were unaffected by the case being judged. It appears that there may be effective ways to counteract potential prejudice among jurors.  Asking potential jurors to consider how race might affect their verdicts appears to have a beneficial impact, though the exact mechanism of this effect is not known.  

Reference: Schuller, Regina A., Veronica Kazoleas, and Kerry Kawakami.  (2009). The Impact of Prejudice Screening Procedures on Racial Bias in the Courtroom.  Law and Human Behaviour, 33, 320-328.
