I believe in knowing and understanding all sides of an argument; that way, a sound and balanced solution is a real possibility.
As I have been advocating for a risk-based classification system here in Virginia, as opposed to the current conviction-based system, some readers might be surprised that I’m posting the below article on this blog.
But I’ve always said that static assessment tools are insufficient for rating future risk because they look ONLY at the past, not at any progress or growth that has occurred in the 5, 10, 15 or more years since the crime. To properly evaluate anyone, a combination of both static AND dynamic factors must be part of the process, and three or more people should tally the results and make the determinations.
After reading the full article (below), I understand both sides of the issue better than before, and I still believe a risk-based classification system in Virginia would elevate our Registry into a smarter tool, allowing State resources to be directed toward those who are more likely to re-offend instead of treating everyone as a high-level threat when we know (based on the recidivism rates) that is NOT the case.
One-size does NOT fit all!
The status quo is failing our citizens.
Let’s face it, there is no crystal ball to predict the future. But if the State of Virginia is going to classify its offenders, it would be smarter to do it with Static (past) AND Dynamic (current) factors together, as opposed to the current static-only process we’ve used for almost 20 years, which has resulted in approximately 83% of the VSP Registry being classified as Violent and only 17% as Non-Violent. An imbalance like this is the result of one-sided thinking; it’s time to look at both sides and then make some changes.
The Promise (and Perils) of Predicting Sex Crimes, September 11, 2014
By Steven Yoder
Attorney General Eric Holder’s August 1 speech criticizing the use of risk assessment in sentencing decisions may not lever the issue to the top of the policy agenda. But a new paper could revive the debate about the effectiveness of risk tools in evaluating the chances of recidivism among those convicted of sex crimes.
A forthcoming article in the Arizona State Law Journal argues that state criminal justice systems which use risk assessment tools may overestimate sex offenders’ likelihood of committing another crime. That message may complicate the efforts of those who advocate reform of sex offender policies. A key goal of reformers is to have states use actuarial risk assessments to classify offenders, instead of basing risk levels on their crime of conviction, as required by the 2006 federal Adam Walsh Act.
“Actuarial” risk assessments are designed to let state criminal justice systems evaluate risk as car insurance companies do. A list of factors that correlate with recidivism is used to group offenders into categories. Offenders with factors that correlate with higher reoffense rates are judged at greater risk; those with fewer factors are put in a lower tier.
Such assessments are increasingly used in release decisions for all types of offenders. A 2008 survey by the Association of Paroling Authorities International found that 32 out of 37 responding states used risk assessment tools to help determine conditions of parole or probation.
For those convicted of sex crimes, the stakes in risk assessment are high. Those identified as likely to commit a new offense appear on public state sex offender registries. In many states and cities, they’re also banned from living near schools, parks, or daycares.
Research shows that actuarial risk assessments perform better than what they replaced—relying on experts to use their experience to make unstructured clinical judgments. Studies done in the late 1990s concluded that professionals don’t perform much better than chance in predicting recidivism.
But criminal law professor Melissa Hamilton, the Law Journal article’s author, sees flaws in the leading actuarial instruments. These instruments, she writes, often look only at historical, or “static,” factors in an offender’s background, particularly their number and type of offenses.
Moreover, they don’t take into account more recent “dynamic” factors that might increase an offender’s chances of staying clear of future crime—success in a treatment program, abstention from alcohol and drugs, or a strong support network, for example.
At least two previous studies also concluded that the inability of static instruments to take into account changes in offenders’ lives weakened their validity, according to a December 2012 meta-analysis of studies on sex offender risk assessment tools in the journal Clinical Psychology Review.
(The Static-99 is the most popular static risk tool. The lead developer was unavailable to respond to requests for comment, and a member of the Static-99 Clearinghouse advisory board told The Crime Report that the board also couldn’t comment.)
Hamilton also focuses on the way assessment tools are used, rather than on the tools themselves. Some states give evaluators the option of “overriding” a tool’s score if their professional judgment indicates that the offender should have been scored higher. Hamilton, a former corrections and police officer, told The Crime Report that evaluators are more likely to throw out low rather than high scores, in part because of widely held assumptions that all sex offenders are at high risk to recidivate.
Her critique could complicate the efforts of those who advocate using individual risk assessments to make tiering decisions about sex offenders instead of the offense-based categories required by the Adam Walsh Act. (The Walsh Act had been implemented by only 17 states as of April.) The Association for the Treatment of Sexual Abusers, for example, has advocated that sex offender tiering decisions be made using “empirically derived risk assessments rather than the unproven offense-based categories proposed in the Adam Walsh Act.”
That may sound like a technical debate, but the implications are enormous.
Offense-based tiering results in many more of those on state sex offender registries being classed as high risk. A 2006 article in the journal Criminal Justice and Behavior, for example, found that after Oklahoma adopted the Walsh Act’s offense-based tiering system, most registrants were shifted from lower risk to higher risk tiers.
Researchers have found problems with the Walsh Act’s tiering system. A November 2012 report released by the Justice Department concluded that it performs poorly in predicting recidivism. The authors concluded that a system based on more empirical data needs to be developed.
Even if Hamilton’s paper provokes questions about whether actuarial risk assessment works, it’s unclear what would replace it. One alternative is to use newer risk assessment tools that consider dynamic factors. The December 2012 Clinical Psychology Review meta-analysis looked at 43 studies of 15 risk assessment tools to see which tools most accurately predicted recidivism for adult male offenders. The researchers concluded that the two tools found to perform best in predicting risk both incorporated dynamic factors. But that conclusion came with a caveat: since they’re newer, they haven’t undergone as many evaluations as the older static tools.
And some states use static instruments as only part of a more extensive evaluation. In Arkansas, for example, staff conduct individual assessments of those convicted of sex crimes before release or when an offender moves in from out of state.
They collect data about any previous offenses that the person was convicted of or charged with, conduct face-to-face interviews, and can require the person to take a polygraph. Subjects are scored on two risk assessment instruments—the Static-99, and another that considers dynamic factors. All of that is used to determine the offender’s risk level for purposes of community notification, says Sheri Flynn, administrator of sex offender screening and risk assessment at the Arkansas Department of Correction.
“The Static-99 is an excellent instrument and it’s by far one of the best researched and well supported,” says Flynn. “But you cannot ignore in our opinion those dynamic factors. If you just look at the number of times people have been convicted, you’re missing the entire boat.”
A system like that may avoid overreliance on a single tool to perform what is, after all, a perhaps-unachievable task: predicting human behavior. A February article in Federal Sentencing Reporter offers a somewhat grim observation about the future of risk assessment: “it is possible that each instrument is reaching a ‘natural limit’ to predictive utility, beyond which it cannot improve.”