Americans have seen the convict labor----now they see the growing re-education through labor in internships, volunteerism, and job training, especially for ex-felons and the long-term unemployed.
What we are not hearing is the connection between the Affordable Care Act, with its heavy emphasis on preventative medicine and lots of mental health PHARMA, and these forced labor camps. Rehabilitation camps are not only for addictions but for what is deemed bad social behavior.
Earlier posts show the increasing use of PHARMA in mental health treatments, especially for youth, and often these treatments are not what citizens want----it is made the only pathway out. Increasingly the terms of these treatments go on and on, because addiction has always been known to be a life-long struggle.
'the Laogai system consists of three distinct types of reform: convict labor (Laogai), re-education through labor (Laojiao), and forced job placement (Jiuye)'.
'The PRC (People’s Republic of China) uses Laojiao to detain individuals it feels are a threat to national security or it considers unproductive. Individuals in Laojiao may be detained for up to three years. Because those in Laojiao have not committed crimes under PRC law, they are referred to as “personnel” rather than prisoners and they are not entitled to judicial procedure. Instead, individuals are sent to the Laojiao following administrative sentences dispensed by local public security forces. This vague detainment policy allows the PRC to avoid allegations that the individual’s arrest was politically motivated and to assert that they were arrested for reasons such as “not engaging in honest pursuits” or “being able-bodied but refusing to work.”'
'Beyond drug centers, Chinese authorities still have many ways to detain people without trial, rights activists said.
Police can detain sex workers, for example, under a mechanism known as "custody and education".
The terminology even appears to be interchangeable'.
The Affordable Care Act has buried, in tons of policy, the dismantling of citizens' protection against involuntary commitment. We kept seeing on national news several mass killings whose shooters, we were told, should have been forced into treatment----yet such cases are so rare that they cannot justify the total loss of protection for all US citizens against forced psychiatric commitment. Please take time to read this from the State of Maine from 2001, when
CLINTON/BUSH/OBAMA FAR-RIGHT 1% WALL STREET LIBERTARIANISM WAS MOVING FORWARD WITH THESE KINDS OF DETENTION.
Alicia Curtis, Bad Subjects, Issue #58, December 2001
For those of us who work in the mental health system, and for those who live with a mental illness, our work and our lives intersect with the legal system around a complex and delicate decision. In certain situations, people with a mental illness can be made to go into a psychiatric hospital or institution against their will. This process is called an involuntary commitment and every state has a law for it, although those laws are not well known. In the state of Maine, where I live and work, this law means that if a person has a mental illness, and if they are in imminent danger of harming themselves or someone else, they can be put into a psychiatric hospital against their will.
The police can initiate this process. They have the authority to take individuals into police custody if they have reason to believe the person has a mental illness and is at risk of substantial harm. Once the person is in custody, the police can then take them to the hospital for evaluation by a psychiatrist. If the psychiatrist believes this person to have a mental illness and to be in danger of imminent harm, the psychiatrist and another person fill out a certification of the need for involuntary commitment, which is also signed by a Justice of the Peace. This constitutes the initial involuntary commitment.
The slang term for the certification process that is used by both workers and patients in the mental health community is "blue paper." The form authorizing the involuntary commitment is blue, and the phrase is used liberally as a verb: "Gee, I hope they don't blue paper me." Slang adds a touch of drollness to a controversial act that represents a difficult decision for doctors, as well as an often very upsetting process for the person being "blue-papered," and sometimes their family. (Although sometimes the family gets upset if the person is not blue-papered.) This initial blue paper holds the ill individual in the hospital for five working days. At the end of these five days, patients may sign in to the hospital on a voluntary basis or they may be discharged. Alternatively, if the psychiatrist feels that the patient continues to be an imminent risk of harm to themselves or others, and if the patient continues to refuse to stay in the hospital on a voluntary basis, there is another, more formal process of commitment which involves a court hearing before a judge.
Many people outside of the mental health system are not necessarily aware of the law surrounding involuntary commitment. Yet most of the time those of us who work inside the mental health system take it for granted. I would like to step back from taking this law for granted and take a critical look at it.
From a civil rights perspective, involuntary commitment creates a class of people who, at the discretion of a police officer, can be taken briefly into police custody and then placed in a sort of preventive detention. Patricia Deegan is an ex-patient and activist who refers to long stays in psychiatric institutions as incarceration, a word that is chosen for its political charge but I think also speaks to the lived experience of the patient. Involuntary incarceration clearly imposes a different standard of civil "liberty" onto the mentally ill than that which is theoretically guaranteed to the rest of us. If you are not classified as mentally ill, can you be confined somewhere for something people believe you are going to do, but which you have not yet done? Is there any evidence that persons with mental illness are actually more dangerous to others than random members of the general public? These questions are starting places for looking beyond common assumptions about the role that involuntary incarceration plays in the interplay between civil rights and civil protections.
For mental health professionals, the law can seem exceptionally frustrating when we are working with patients who are tormented and debilitated by their illness and unable or unwilling to receive treatment in the community. A patient who hears voices constantly, who exists in a state of fear about people she believes are trying to kill her, who does not take any medicines because she believes they are poisoned, and who is too distracted by her illness to cook meals or take showers, could probably not be committed to a hospital involuntarily. For those of us who entered a helping profession in order to help people who are suffering, this fact often feels like a tremendous failure of the system. Dr. Paul Chodoff, who has written several articles on the topic, points out that the focus of the involuntary commitment law on "imminent harm" as the main criterion for commitment, leads psychiatrists to feel frustrated that their work is aimed more at serving the police state in keeping dangerous people off the streets than in carrying out the aims of psychiatry. He argues that the involuntary commitment law should be broadened to allow commitment of those with a mental illness who need hospitalization due to the severe state of their illness, whether they are dangerous or not.
The existing law about involuntary commitment is the result of a long dialectic between an attitude of paternalism toward the mentally ill and ideals of personal freedom and civil liberty. Both the state and the profession of psychiatry have evidenced paternalism towards those with a mental illness, which contrasts with constitutional rights that were revisited in the civil rights movements of the 1960s. Involuntary commitment has also been shaped by the history of psychiatry. The perceived need for involuntary hospitalization is a result of the way that psychiatric treatment was conceptualized and practiced in the nineteenth century. It also arises out of a social contract, wherein the State and the profession of psychiatry join forces to protect the public from a group of people who are seen as both terrifying and burdensome.
The practice of involuntary commitment also arose as a result of the creation of the psychiatric institution as both the locus and the means for treatment of insanity. The institution was born of several different social forces. In early New England there had been a smattering of psychiatric hospitals, which had mostly started as single wards within general hospitals and had grown into separate buildings. Starting in 1810 there was a movement towards building psychiatric hospitals and institutions that continued to gather steam throughout the first half of the nineteenth century. This occurred in the context of overall changes in social welfare policy; there were new ideas and practices about the State being responsible for the indigent and the troubled, and other institutions for special populations such as the feeble-minded or epileptics were built at this time. The Victorian era brought a lower tolerance for disorder and deviance, and a sense of urgency about maintaining public safety and social order. At the same time there was a growing sense of idealism and excitement about the possibilities of a cure for mental illness. Whereas previous treatments for mental illness (exorcisms, bloodletting, emetics and purges) had generally not been successful and had contributed to a sense that insanity could only be subdued or confined, the new moral treatment being practiced in England promised the rehabilitation of the insane.
Maine's public psychiatric hospital was built in the early nineteenth century to illustrate these new ideas and treatments. The Maine Insane Hospital (often referred to at the time as the "Maine Insane") was built in 1840 in Augusta, Maine. Governor Dunlop's speech before the Maine Legislature in 1830, in which he advocated for the creation of such an institution, clearly expressed the rhetoric of collective social responsibility and hope for a cure: "Humanity loudly calls for appropriate means of relieving and restoring to enjoyment and usefulness [those bereft of reason], which means, are now not only beyond the reach of the poor and friendless, but cannot be commanded by the ordinary ability of our citizens or towns, on whom the duty of providing for their support may fall." In keeping with the ideas of moral treatment, the entire structure and experience of the hospital was designed as a kind of treatment. The buildings and grounds represented a clean and orderly environment which would restore the disordered mind to order. There was proper ventilation to dissipate the bad vapors of insanity. Besides the physical environment of the hospital, there was work, or occupational therapy. The hospital was part of a 220-acre working farm, which represented the key point of moral treatment. As the first report of the hospital to the Maine legislature in 1841 stated: "Employment of some kind is essential to the recovery of the insane. No employment is so congenial to the human constitution as agriculture."
A hospital like this was seen as the means of treatment for insanity, and so the means of getting treatment was for the mentally ill to enter the hospital. The only problem was, how was it determined who needed treatment, and thus needed to be in the hospital? Psychiatry was a fairly young profession with few standard ideas about mental illness. Was it a disease of the organs and physical body like other diseases, or a disease of the humors and spirits? This was still a matter of debate. The only available basis for diagnosis was behavior, and for a young profession in a socially repressive age, the behaviors that supposedly expressed mental illness were often the same as the behaviors that expressed social aberrance, deviance, and "immorality." Many people were hospitalized with a diagnosis of "moral insanity." This concept may have been a precursor to the current diagnosis of antisocial personality disorder, but it also included such Victorian no-nos as masturbation and extramarital sex. There were also claims that husbands brought their wives to institutions just to be rid of them.
The commitment law was created as a move toward reform. In 1874, Mrs. E.B.W. Packard, a one-woman whirlwind of a reform movement, successfully lobbied in the state of Maine for passage of a law to protect against wrongful commitment. She had been committed to an institution by her husband, a Calvinist minister, for arguing with him about Calvinist theology and feminism. Historians believe that Mrs. Packard probably did have a psychotic disorder, but that she would not need to be hospitalized according to modern standards. Before passage of the commitment law, in some states a husband could commit a wife to a psychiatric institution solely at the discretion of the superintendent of the institution. Mrs. Packard felt that a law creating a more formal process of commitment would bring order and justice to the system. In practice, though, people continued to be committed involuntarily for reasons having more to do with social control than psychiatric treatment.
Husbands ridding themselves of wives via the psychiatric institution was still enough of a problem in the 1930s that the first woman in Maine's legislature, Gail Laughlin, authored a bill penalizing husbands for bringing false testimony in the involuntary commitment hearings of their wives. I worked with a patient who in the 1960s had been brought to the hospital by her husband. The chief complaint listed on the admitting record was: "Patient does not do her housework." I think she did actually have a recurrent depression, a symptom of which was her inability to care for herself and her home, but there was obviously a large overlap conceptually between mental illness and not functioning in a prescribed social role. There is also a long history of the forced treatment of homosexuality as mental "illness." One gay man I know has a familiar story. He was brought, as a teenager, to a psychiatric hospital in the Midwest by his parents, when they found out he had been having gay sex. He was involuntarily committed to the institution and treated for his homosexuality. (The treatment didn't work.)
Until the 1960s, the voice of paternalism asserted the need for involuntary commitment. But as African-Americans and women struggled for civil rights, there was renewed discussion and activism about civil rights for the mentally ill. Arguments for increased freedoms for the mentally ill took two paths, one somewhat fruitful and one less fruitful. Against the voice of paternalism, some people posed the radical question: Is there even such a thing as mental illness? For example, R.D. Laing made the famous argument that mental illness is a privileged state, an alternative viewpoint on the world. This argument at least challenged many assumptions of the mental health profession and caused them to be re-examined. On the other hand, the psychiatrist Dr. Thomas Szasz wrote a history of how early psychiatrists (such as Bleuler and Kraepelin) created the diseases of mental illness by classifying certain behaviors that were disturbing to society in general, under the heading of a diagnosis, despite no evidence at the time of a cellular-level disease process. He viewed this process as the manufacture of disease, a sort of large-scale hoax which created and justified the social roles of psychiatrist and mental patient, and justified the practice of placing these patients against their will in a psychiatric institution. He regarded all of this as nothing more than a sanctioned form of social control. He saw a tacit contract existing between society as a whole and the class of psychiatrists, in which psychiatrists arrange to confine and control persons disturbing to society, in return for a social regard as members of the medical profession.
Neither R.D. Laing's nor Thomas Szasz' arguments ultimately changed the laws and practices of institutions and involuntary commitment as much as did the arguments based on the principles of freedom and personal liberty intrinsic to America's self-definition. These arguments hold that despite the existence of mental illness, and despite the fact that the mentally ill might benefit from treatment, personal freedom is a higher order good than treatment. This focus on ideas of civil liberty coincided with a conceptual shift in the 1970s regarding the locus and modalities of treatment of mental illness. The institution had devolved from being a type of treatment to being a type of warehouse, and the community was seen as the best healing environment. Models of community-based treatment, like the community mental health center and assertive community treatment, also known as the ACT model, were developed. This change in thinking and practice shaped the current commitment law, which is based on the idea that someone cannot be detained or confined without extremely good cause. It also limits the duration of involuntary commitment, and ensures that no one individual or stakeholder may make commitment decisions. Currently dangerousness is the standard for commitment, as dangerousness is a relatively simple standard to define. But dangerousness is also a highly convenient standard, both because the criminal justice system already confines people who have been determined to be dangerous, and because of continuing public fears about the alleged dangerousness of those with a mental illness.
This spring there was news in the New York Times that political dissidents in China were being forced into special psychiatric hospitals run by the police and given electro-shock therapy against their will. The mainstream organization of Chinese psychiatrists decried this practice. However, this phenomenon makes obvious that wherever the mechanism of involuntary commitment exists, the possibility for abuse co-exists. It is still possible to distort the language and practice of psychiatry to overlap with social control. For instance, a Chinese official discussed the idea of a political mania, the symptoms of which would be unreasonable suspicion and excessive, unhealthy energy directed in an obsessive manner to political organizing, despite the obvious negative social consequences of this activity. In any troubled relationship between the powerful and the less powerful, like the relationship between a repressive totalitarian government and a dissident citizen, or between parents and a gay teenager, or between husband and wife in a patriarchal society, the language and ideas of psychiatry and mental health practice are open to abuse as a form of social control. In these instances, the mechanism of involuntary commitment is also open to abuse as a way to confine those who are threatening to the social or political order.
I therefore hope to see the practice of involuntary commitment continue to evolve as a balance between civil liberty and the need to care for those who cannot adequately address their own safety. I would not wish for the end of involuntary commitment, because I still view it as a way to provide treatment to those who refuse out of the fear, hopelessness, and suspicion that a mental illness can bring, and who might not otherwise survive.
Alicia Curtis is a psychiatric social worker and a writer.
We saw during the REAGAN administration the deregulation of our mental health system at the same time he was closing the mental health facilities and sending patients to the streets----and the dismantling of Federal protections against involuntary commitment continued, with the 1991 "realignment" sending what funding was tied to MEDICAID down to local county governments to create their own systems of handling the mentally ill. Republicans love this stuff until it comes home to them, and indeed this loosening of mental health laws, now tied to police and jailing, will hit Republican voters as hard as Democratic ones. Just an aside for my Republican friends: the Affordable Care Act is written as a back door to taking away gun ownership rights under what is a broad definition of mental illness. This can now mean any level of depression or drunken fighting. ACA says a person found under treatment for mental health can have his/her gun rights taken away indefinitely----and everyone knows addiction is for life. So, the Republican voters who always want to end Federal protections and bring policy locally will always have extreme wealth and power telling the 99% what to do.
This discussion is not about the gun laws; it is about losing citizens' rights regarding psychiatric commitment and forced drugging.
The Lanterman-Petris-Short Act - Involuntary Commitment Act of 1967
The Lanterman-Petris-Short Act, often abbreviated LPS, concerns the involuntary civil commitment of individuals for psychiatric treatment in California. Since the passage of this involuntary commitment law, there have been significant changes in the mental health delivery system, and the law is now being interpreted in a manner that adversely impacts hospitals and their emergency departments.
Although numerous efforts have been undertaken in the last decade to make the law “work,” these efforts have failed to improve the fragmented and inconsistent application of the law and have placed additional unfunded burdens on hospitals.
The intents of the LPS Act were to end the inappropriate and indefinite commitment of mentally disordered persons; to provide prompt evaluation; to guarantee and protect public safety; to safeguard individual rights through judicial review; to provide individualized treatment, supervision and placement services; to encourage the full use of all existing agencies, professional personnel and public funds to prevent duplication of services and unnecessary expenditures and to protect mentally disordered persons from criminal acts.
In the four decades since the enactment of the original LPS Act, much has changed in how care is delivered to individuals with mental illness in California. In 1991, a major change occurred with the enactment of the Bronzan-McCorquodale Act (Chapter 89, Statutes of 1991), referred to as “realignment.” Realignment transferred financial responsibility for most of the state’s mental health care from the state to local governments. The core principle under realignment was to provide expanded discretion and flexibility to counties. From 1995 through 1998, there was also a major shift in county obligations within the Medi-Cal program. In order to provide counties more flexibility in the use of state funding, and to enable more integrated and coordinated care, the state developed a plan to consolidate the two Medi-Cal funding streams for mental health. A decision was made to “carve out” specialty mental health services from the rest of Medi-Cal managed care, making California’s Medi-Cal mental health program entirely managed by local government.
As realignment and consolidation was taking place, the number of community hospitals accepting individuals in need of involuntary LPS care (“designated” facilities) decreased dramatically. At the same time, the five state hospitals operated by the California Department of Mental Health (DMH) were accepting fewer and fewer community referrals unless the individual was committed by court action or in connection with criminal proceedings.
Finally, the federal Emergency Medical Treatment and Active Labor Act (EMTALA), passed in 1986, did not consider the impact of California involuntary treatment laws on hospitals and has resulted in a growing dependence on hospitals as the treatment provider of last resort, regardless of a hospital’s capacity, capability or competency to care for this population.
Current State of Affairs
Today, California’s local mental health delivery system relies on a complex and shifting patchwork of federal, state and local funds and varies dramatically from county to county and from year to year, based on the policy and the political landscape at all levels of government. In many communities, an increasing number of individuals with mental illnesses are becoming homeless or incarcerated, with many others remaining untreated or under-treated. This will be exacerbated as the state attempts to meet its court-ordered obligation to relieve overcrowding in state prisons and expands coverage to individuals formerly uninsured.
The enactment of the federal Wellstone-Domenici Mental Health Parity and Addiction Equity Act of 2008, the federal Patient Protection and Affordable Care Act of 2010 and the implementation of California’s Medi-Cal Section 1115 “Bridge to Recovery” Demonstration Waiver are adding to the complexity.
With each county having unique infrastructure, program design and administration, there is significant diversity in the level and types of mental health services available. For example, in California, 25 of 58 counties have no inpatient psychiatric services and 44 counties have no child/adolescent inpatient psychiatric facilities. This has led to an increased and often inappropriate dependence on hospital emergency rooms (often the only 24/7 service available), which have become the default psychiatric services provider. This is occurring without regard to a hospital’s county-determined involuntary designation status or its ability to care for the involuntary patient population.
Hospitals in some areas of the state have seen a 400 percent increase in the past year in the number of individuals with psychiatric disorders being seen in their emergency departments. Some hospitals have been forced to admit patients with acute psychiatric needs to their medical floors while awaiting placement in a facility providing psychiatric services. This places hospitals in an untenable situation of violating both their licensing laws and the civil rights of the patient.
During the writing of the LPS Act in 1967, a locally funded and provided community mental health system was never envisioned. As a result, no legal mechanisms were established to ensure those individuals who are too ill to accept or access mental health treatment would be compelled to do so. Thus, these individuals have become the frequent users of both inpatient psychiatric services and hospital emergency rooms. They are the “revolving door” patients with short-term usage of expensive hospital services as their primary locus of treatment. Once discharged from the hospital, these individuals frequently decompensate rapidly and either end up back at the hospital or become a threat to public safety. With the reduction in involuntary acute care beds, emergency rooms and jails have become the treatment settings of last resort. Mechanisms must be developed so that these individuals can be resolutely treated in the community rather than continue to cycle through the system.
Counties are also liberally interpreting the involuntary commitment laws – the LPS Act – to meet the local infrastructure needs of law enforcement, emergency transportation providers, county mental health departments, judicial services and community treatment providers. This has led to wide variations in application of the law from county to county, from city to city and even from hospital to hospital. All too often, these interpretations are to the detriment of patients, hospitals and the staff caring for them and may not be protecting the patients’ civil liberties or providing equal and consistent protections as prescribed in law.
Given the importance of ensuring that hospitals and their emergency departments are available to those in need of life-saving treatment, we must efficiently use our limited resources. It is well documented that failing to provide adequate mental health care will lead to higher social, personal and economic costs. The criteria in California’s LPS laws must be updated to incorporate current medical science regarding mental illness; correspond more closely with the Medi-Cal definition of “medical necessity” and provide treatment before unnecessary social, criminal justice and/or medical consequences occur.
The ACA was progressive posing from the start----it is a Republican think-tank health policy, so we know it is not public-interest health care policy. As the statement below says----
'Critics point out that there are some limits to the law. One criticism is that there will not be enough mental health providers to accommodate the expected demand'.
There have never been enough mental health professionals, and when Reagan closed the public mental health clinics this became worse, so we KNOW THEY ARE NOT GOING TO BRING IN REAL MENTAL HEALTH treatment. They are going to increase PHARMA and tie treatment to rehabilitation camps. Mental health treatment by labor. While US citizens are pushed to sign on to this, we are already seeing the structures being built, and of course Wall Street Baltimore Development and its 'labor and justice' NGOs are already touting that all addicts need is labor.
Americans just watched as these few decades of dismantled oversight and accountability saw health industry fraud soar----hundreds of billions of dollars in fraud each year, with Medicaid hit hardest. So, while ACA seems to have the interests of low-income workers in mind in sending more and more funding to mental health treatment, at the same time Bill Gates and his PHARMA global corporations center on mental health PHARMA patents----AND as we see the installation of forced labor tied to rehabilitation----we see tons of Federal Medicaid funding going to these 'rehabilitation' structures that will be part of a global corporate campus.
ARE WE REALLY BELIEVING THAT THE SAME POLS WHO DISMANTLED A STRONG PUBLIC MENTAL HEALTH SYSTEM ARE NOW CONCERNED ABOUT THOSE MENTAL HEALTH ISSUES?
Affordable Care Act Expands Mental Health and Substance Use Disorder Benefits and Federal Parity Protections for Over 62 Million Americans
The Affordable Care Act builds on the Mental Health Parity and Addiction Equity Act of 2008 to extend federal parity protections to 62 million Americans. The parity law aims to ensure that when coverage for mental health and substance use conditions is provided, it is generally comparable to coverage for medical and surgical care. The Affordable Care Act (ACA) builds on the parity law by requiring coverage of mental health and substance use disorder benefits for millions of Americans in the individual and small group markets who currently lack these benefits, and expanding parity requirements to apply to millions of Americans whose coverage did not previously comply with those requirements.
Now a global corporation will be given those Medicaid mental health funds to build its own idea of what is good for employees needing REHABILITATION. WAKE UP FOLKS----GETTING SUBSIDIES WHILE USING PEOPLE AS FORCED LABOR FOR THEIR OWN GOOD. How could that lead to bad motives?
Employee Assistance Programs
As workplaces began to instate alcohol-free and drug-free policies, the U.S. Department of Labor joined in the effort to help reduce the effects of addiction in the workplace. Employee Assistance Programs (EAPs) grew out of these efforts to help employees overcome addiction problems. Prior to these efforts, employers would either terminate an employee or monitor and limit the likelihood of any potential problems caused by an employee’s condition. While this approach may have helped employees keep their jobs, it did little to address their underlying addiction problems.
An employer may house an EAP division within their human resource department or contract with an EAP vendor to provide services for employees. Since alcohol and drug addictions are classified as medical conditions, federal health laws require EAP representatives to keep all information regarding an employee’s condition confidential. If you’re considering entering a rehabilitation program and need to take a leave of absence, an EAP representative can help you make whatever work schedule arrangements are needed.
Working with an Employee Assistance Program
Working with an EAP involves following a treatment plan agreement, which is developed during a series of counseling sessions with a representative. The counseling sessions allow EAP representatives the chance to assess your condition and make the needed referrals for treatment. A treatment plan agreement includes a timeline for completion and spells out the conditions of your work schedule, be it a modified schedule or leave of absence.
Alcohol and drug rehabilitation options can vary depending on the level of treatment needed to help you get back to a normal, productive lifestyle. An EAP rep will base this decision on the information you provide during the initial counseling sessions. The most commonly used rehabilitation options include inpatient treatment, outpatient treatment, therapy and 12-step programs (AA or NA). For some people, a combination of two or more options may be necessary.
Inpatient treatment will require you to stay in a treatment facility, most often for a minimum of 30 days. These facilities work best for someone who has a long-term addiction that requires an initial detoxification period. During this time, patients undergo therapy to help address whatever life issues contribute to the addiction. Patients also learn coping skills to help them avoid using alcohol or drugs. Inpatient stays are typically followed up by outpatient treatment, which involves checking in with treatment professionals regularly and attending 12-step meetings frequently.
If you’re at the early stages of an addiction, the EAP representative may refer you to an outpatient program at the start. During the early stages, the physical effects of addiction haven’t reached the point where your life is spinning out of control. As long as you’re determined to stop using, outpatient treatment involving therapy sessions and regular 12-step meetings can help you recover from an addiction.
Treatment Trends
Many alcohol and drug rehabilitation centers report their admission and discharge rates on a national survey conducted by the Substance Abuse and Mental Health Services Administration, also known as SAMHSA. Based on survey information collected in 2008 and 2009, alcohol and drug treatment trends show:
- 9.3 percent of the US population had an addiction to alcohol or illegal drugs
- Only 11.2 percent of this group actually received treatment from a rehab facility
- 1.8 million admissions to treatment facilities were reported in 2008
- 41.4 percent of the 1.8 million admissions resulted from alcohol addiction
- 20 percent of the 1.8 million admissions resulted from heroin and opiate addictions
- 17 percent of the 1.8 million admissions resulted from marijuana addictions
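A rough back-of-the-envelope sketch of what those percentages mean in absolute numbers (the 1.8 million total is the only absolute count the survey summary gives; the rest is simple arithmetic):

```python
# Rough arithmetic on the SAMHSA 2008 admissions figures quoted above.
total_admissions = 1_800_000  # admissions to treatment facilities reported in 2008

shares = {
    "alcohol": 0.414,         # 41.4 percent of admissions
    "heroin/opiates": 0.20,   # 20 percent
    "marijuana": 0.17,        # 17 percent
}

for substance, share in shares.items():
    print(f"{substance}: about {share * total_admissions:,.0f} admissions")
# alcohol: about 745,200; heroin/opiates: about 360,000; marijuana: about 306,000
```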
As drug rehabilitation treatment can take considerable chunks of time out of a person’s daily schedule, working with an employer’s EAP allows you to coordinate your treatment schedule with your work schedule without placing your job at risk. For people who require inpatient treatment, this provision can mean the difference between keeping a job and losing a job. In some cases, outpatient treatment programs may require patients to attend therapy and 12-step meetings on a frequent basis throughout the week. Knowing your employer’s EAP program can accommodate this schedule can also provide relief from worry about losing your job.
An EAP representative can also refer you to a treatment program that offers partial hospitalization as opposed to inpatient care. Partial hospitalization is used in cases where an employee requires more treatment than an outpatient program provides. With a partial hospitalization program, patients attend therapy and 12-step meetings during the day and return home at night. This course of treatment may actually require a shortened leave of absence from work, which is something an EAP can accommodate.
REMEMBER-----GETTING A HOSPITAL BED ON TODAY'S HEALTH PLANS IS HARDER FOR FOLKS HAVING LIFE-AND-DEATH MEDICAL PROBLEMS----MENTAL HEALTH PATIENTS WERE PUSHED FROM MOST HEALTH SYSTEMS.
An employer’s EAP representative may refer you to an outpatient treatment program based on your need for treatment as well as other factors that affect your situation. As inpatient programs can be quite expensive, some employees may not be able to afford the costs involved, especially if their health insurance only covers a portion, or in some cases, none of the costs. An EAP rep may also make an outpatient program referral in cases where an employee has a supportive family environment that will ensure the employee follows through on the requirements of the program.
Ultimately, the level of structure a person needs to get addictive behaviors under control will determine whether an inpatient or outpatient treatment approach is needed. This means some people may prefer to maintain a work schedule while attending therapy and 12-step meetings, as work provides the kind of structure they need. As different situations call for different courses of treatment, the more an employer EAP representative knows about you, the more likely you’ll receive the type of treatment you need.
If you prefer to not discuss your treatment with your employer in any way, including with an EAP representative, you can opt for a private outpatient treatment program that offers a modified schedule. With these programs, you may be able to continue your normal work schedule while seeking treatment.
As Americans fight for the right to privacy surrounding health issues, the Affordable Care Act has built all the mechanisms to make all health data available to employers, and now the employers are building their own preventative health branches with Federal funding funneled to them as knowing best what our health care and mental health care should look like. Someone who struggles with depression, for example, will have no way of keeping that information from an employer, and these are the people who will be shuttled over to these rehabilitation forced labor camps under the guise of treatment that never ends.
This is also selective and will be used in retaliation against and oppression of workers, who are now made fearful of being sent into these rehabilitation treatment labor programs.
The percentage of people deemed sociopaths is around 5%----not coincidentally the 5% working for the 1% who SHOULD BE SEEKING TREATMENT for lying, cheating, and stealing. As well, this same group is the highest in drug and alcohol use and yes, ON THE JOB. They will not be the ones tested or placed into these rehabilitation structures.
600 State Office Building
St. Paul, MN 55155
Anita Neumann, Legislative Analyst
firstname.lastname@example.org Updated: June 2010
Workplace Drug and Alcohol Testing
This information brief summarizes the provisions of Minnesota’s Drug and
Alcohol Testing in the Workplace Act.
Who is Covered
The law applies to all employers, defined as “any person or entity located or doing business in
this state and having one or more employees,” and includes the state and all political or other
governmental subdivisions. The act defines “employee” as any person, including an independent
contractor or person working for an independent contractor, who performs services for
compensation. Job applicants are also protected. A job applicant is any person who has applied
for work with an employer and anyone who has a job offer contingent upon passing a drug or
alcohol test.
When Testing is Permitted
Drug and alcohol testing of employees and applicants is permitted only as explicitly authorized
by statute. Testing can only be done under a written drug and alcohol testing policy that meets
statutory requirements and must be conducted by an accredited or licensed testing laboratory.
Drug and alcohol testing is permitted only in the following circumstances:
• Job applicant testing. If a job applicant has received a conditional job offer, the
employer may require or ask that applicant to undergo testing, as long as all
applicants who receive conditional job offers for the same position are required or
asked to undergo testing.
• Routine physicals. An employer may require employees to take a test as part of a
routine physical offered by the employer, as long as the physical takes place no more
than once a year and the employee receives at least two weeks’ written notice of the test.
• Random testing. An employer may require employees to submit to random testing
only if they are employed (1) in safety-sensitive positions, defined in the statute as
jobs in which an impairment caused by drug or alcohol usage would threaten the
safety or health of any person, or (2) are professional athletes and subject to a
collective bargaining agreement permitting random testing.
• Reasonable suspicion testing. An employer may require an employee to take a test if
there is a reasonable suspicion that the employee is under the influence of drugs or
alcohol; has violated the employer’s written rules on drug or alcohol use, possession,
sale, or transfer while on the job, at the job site, or while operating the employer’s
vehicle, machinery or equipment; has sustained a personal injury or caused another
employee to sustain a personal injury; has caused a work-related accident; or was
operating a vehicle or other equipment involved in a work-related accident.
• Treatment program testing. If an employer has referred an employee to a chemical
dependency treatment or evaluation program or if the employee is participating in
chemical dependency treatment under the employee’s benefit plan, the employer may
request or require the employee to submit to testing without notice during the
evaluation or treatment period and for two years after the end of any prescribed
treatment program.
Drug testing for employment has historically hit low-wage jobs, and feelings of privacy have stopped corporations from taking this kind of testing too far. As the labor testing industry becomes tied to preventative care, these stocks are soaring-----this is all the health care over 70% of Americans are now receiving----and this Medicaid-as-mental-health will have these tests soar as well. This has nothing to do with quality care and strong public mental health structures---this is all tied to forced labor as rehabilitation, and we will see drugs required in courses of treatment that people cannot opt out of.
As always, they install this by outsourcing all of this policy to corporate non-profits and small businesses while posing as a small business economy----but these corporate policies will be folded into the global health systems in each state, and they will work as a team minus any small business or public input.
'Today, ten cents worth of chemicals are sold for $30 to as much as $100. Drug testing is a multi-billion-dollar-a-year industry'.
It's Time to End All Drug Testing
Wednesday, 12 March 2014 14:53 By The Daily Take Team, The Thom Hartmann Program | Op-Ed
As the reality of legalized marijuana inches closer and closer every day, more and more Americans are rethinking our society’s attitude towards drugs.
But not the American Society of Addiction Medicine.
In a recent white paper, the organization argued that we should start expanding drug testing at schools and in the workplace.
As that paper’s author put it,
“The major need today is the wider and smarter use of the currently available drug testing technologies and practices.… Smarter drug testing means increased use of random testing rather than the more common scheduled testing, and it means testing not only urine but also other matrices such as blood, oral fluid (saliva), hair, nails, sweat and breath.”
I couldn’t disagree more.
Drug testing is counterproductive, degrading, and invasive, and it’s time we put an end to it once and for all.
Although humans have used narcotics and intoxicants since the dawn of time, drug testing as we know it is a relatively new phenomenon, and it really took off with Nixon’s War on Drugs.
I had a friend back in the early 1970s - let’s call him Stanley - who sold drug purity testing kits out of the back of High Times magazine. It was a good business because it cost about ten cents for the drug-testing chemicals and he sold the testing kit for ten bucks plus shipping. By the 1980s, though, once the drug testing hysteria took off, he got really rich by selling his little drug-testing company for several million dollars.
The reason Stanley was able to sell his testing kits for such a big markup, of course, was that they’re hugely profitable. Today, ten cents worth of chemicals are sold for $30 to as much as $100. Drug testing is a multi-billion-dollar-a-year industry.
And it’s only gotten bigger.
According to some estimates, approximately 84 percent of all American employers require pre-employment drug tests.
This is absolute insanity.
There is little proof that drug tests do anything other than make testing companies rich. That’s because, as the ACLU has concluded,
“…drug tests do not measure impairment. Rather than looking for drugs, drug tests look for drug metabolites…As a result, drug tests mainly identify drug users who may have used a drug on the weekend, as they might use alcohol, and who are not under the influence of a drug while at work or when tested.”
That’s the biggest problem with drug testing. If an employee’s drug use actually affects their job performance, then their employer can and should have a discussion with them about it - and if they’re seriously impaired, get them into therapy or out of the job. Any other probing into an employee’s out of work behavior is just a violation of their basic right to privacy.
Think of it this way: there are a whole bunch of things that can affect someone’s job performance. Health issues, financial issues, spousal issues, quality of sleep, you name it. And if any one of those things becomes a problem, then an employer should work it out with his or her employee.
But if we took the principle behind drug testing to its logical conclusion, then we’d let employers install cameras in their workers’ houses to see if they’re getting a full night’s sleep. After all, poor sleep can impair many people worse than moderate drug use.
Of course, people would say that monitoring employees’ sleep is an insane idea. But it’s just as insane as making people pee into a cup to work at a factory.
There is maybe a case to be made that some jobs, like being a commercial airline pilot, are so dangerous that we should require drug testing for them.
But I know from years of experience as a pilot and passenger that the people who work in the airline industry are so concerned about their safety, as well as the safety of their passengers, that they will self-regulate even without the threat of getting fired after a failed drug test.
And what’s more, the work and pay schedules of some airlines - particularly the commuters, who pay their workers less than Burger King managers and have them work grinding hours - have been demonstrated to be a serious safety problem, one that’s arguably worse than any problem casual drug use could cause.
Ultimately, drug-testing gives people a false sense of security. And false positives regularly cost people time, money, and sometimes even their careers.
Most importantly, though, drug testing cuts at the core of our right to privacy. It gets us used to regularly having our privacy - including the privacy of our own bodies - invaded.
It promulgates the false meme that the Fourth Amendment is porous, when in fact it’s very clear in saying that our government has no right to mandate the inspection of your person or papers without getting a warrant first.
It also promotes the worst ideas about what it means to be both a drug user and a worker in America.
It promulgates the false meme that drug abuse should be a criminal matter, when in fact it’s a medical matter.
And it promulgates the false meme that employers are kings who can do whatever they want to their employees, when in fact employers should be treating their employees with respect.
What you do on the weekends and in the privacy of your own home is your business and your business alone, and no one should be allowed to punish you for it.
We need to end all drug testing beyond what is totally voluntary.
Let’s make America once again the “Land of the Free.”
'First, this paper may appear to paint a gloomy picture of future threats and abuses.'
For academic scientists in medicine and health who have followed over these few decades the growth of science and medicine around THE BRAIN----one sees good uses and bad. As this ethicist tells us, there is a propensity for great bad. Mind control----creating the conditions where machines or drugs tell the truth and a citizen does not----creating multiple tiered levels of class and ability from testing and analysis, already happening. If it were not for the breakdown in all US public health ethics, morals, and patient rights in the pursuit of health industry profits, we might think those in leadership would make these discoveries for the good. People with no morals, ethics, or rule of law, profiting any way they can, will become DR NO.
Please take time to read this article or others on the bioethical concerns of THE BRAIN research and the coming patents and treatments. The laws around this are not there, nor are the citizen protections.
The Neuroscience Revolution, Ethics, and the Law
Markkula Center for Applied Ethics
Henry T. Greely
"There's no art to find the mind's construction in the face;
He was a gentleman on whom I built an absolute trust."1
The lament of Duncan, King of Scotland, for the treason of the Thane of Cawdor, his trusted nobleman, echoes through time as we continue to feel the sting of not knowing the minds of those people with whom we deal. From "we have a deal" to "will you still love me tomorrow?", we continue to live in fundamental uncertainty about the minds of others. Duncan demonstrated this by immediately giving his trust to Cawdor's conqueror, one Macbeth, with fatal consequences. But at least some of this uncertainty may be about to lift, for better or for worse.
Neuroscience is rapidly increasing our knowledge of the functioning, and malfunctioning, of that intricate three-pound organ, the human brain. When science expands our understanding of something so central to human existence, these advances will necessarily cause changes in both our society and its laws. This paper seeks to forecast and explore the social and legal changes that neuroscience might bring in four areas: prediction, litigation, confidentiality and privacy, and patents. It complements the paper in this volume written by Professor Stephen Morse, which covers issues of personhood and responsibility, informed consent, the reform of existing legal doctrines, enhancement of normal brain functions, and the admissibility of neuroscience evidence.
Two notes of caution are in order. First, this paper may appear to paint a gloomy picture of future threats and abuses. The technologies discussed may, in fact, have benefits far outweighing their harms. It is the job of people looking for ethical, legal, and social consequences of new technologies to look disproportionately for troublesome consequences — or, at least, that's the convention. Second, as Niels Bohr (probably) said, "It is always hard to predict things, especially the future."2 This paper builds on experience gained in studying the ethical, legal, and social implications of human genetics over the last decade. That experience, for me and for the whole field, has included both successes and failures. In neuroscience, as in genetics, accurately envisioning the future is particularly difficult as one must foresee successfully both what changes will occur in the science and how they will affect society. I am confident about only two things concerning this paper: first, it discusses at length some things that will never happen, and, second, it ignores what will prove to be some of the most important social and legal implications of neuroscience. Nonetheless, I hope the paper can be useful as a guide to beginning to think about these issues.
Advances in neuroscience may well improve our ability to make predictions about an individual's future. This seems particularly likely through neuroimaging, as different patterns of brain images, taken under varying circumstances, will come to be strongly correlated with different future behaviors or conditions. The images may reveal the structure of the living brain, through technologies such as computer-assisted tomography (CAT) scans or magnetic resonance imaging (MRI), or they may show how different parts of the brain function, through positron emission tomography (PET) scans, single photon emission tomography (SPET) scans, or functional magnetic resonance imaging (fMRI).
Neuroscience might make many different kinds of predictions about people. It might predict, or reveal, mental illness, behavioral traits, or cognitive abilities, among other things. For the purposes of this paper, I have organized these predictive areas not by the nature of the prediction but by who might use the predictions: the health care system, the criminal justice system, schools, businesses, and parents.
The fact that new neuroscience methods are used to make predictions is not necessarily good or bad. Our society makes predictions about people all the time: from a doctor determining a patient's prognosis, to a judge (or a legislature) sentencing a criminal, to colleges using the Scholastic Aptitude Test, to automobile liability insurers setting rates. But although prediction is common, it is not always uncontroversial.
The Analogy to Genetic Predictions
The issues raised by predictions based on neuroscience are often similar to those raised by genetic predictions. Indeed, in some cases the two areas are the same — genetic analysis can powerfully predict several diseases of the brain, including Huntington disease and some cases of early-onset Alzheimer disease. Experience of genetic predictions teaches at least three important lessons.
First, a claimed ability to predict may not, in fact, exist. Many associations between genetic variations and various diseases have been claimed, only to fail the test of replication. Interestingly, many of these failures have involved two mental illnesses, schizophrenia and bipolar disorder.
Second, and more important, the strength of the predictions can vary enormously. For some genetic diseases, prediction is overwhelmingly powerful. As far as we know, the only way a person with the genetic variation that causes Huntington disease can avoid dying of that disease is to die first from something else. On the other hand, the widely heralded "breast cancer genes," BRCA 1 and BRCA 2, though they substantially increase the likelihood that a woman will be diagnosed with breast or ovarian cancer, are not close to determinative. Somewhere between 50 and 85 percent of women born with a pathogenic mutation in either of those genes will get breast cancer; 20 to 30 percent (well under half) will get ovarian cancer. Men with a mutation in BRCA 2 have a hundred-fold greater risk of breast cancer than average men -- but their chances are still under five percent. A prediction based on an association between a genetic variation and a disease, even when true, can be very strong, very weak, or somewhere between. The popular perception of genes as extremely powerful is probably a result of ascertainment bias: the diseases first found to be caused by genetic variations were very powerful — because powerful associations were the easiest to find. If, as seems likely, the same holds true for predictions from neuroscience, such predictions will need to be used very carefully.
Finally, the use of genetic predictions has proven controversial, both in medical practice and in social settings. Much of the debate about the uses of human genetics has concerned its use to predict the future health or traits of patients, insureds, employees, fetuses, or embryos. Neuroscience seems likely to raise many similar issues.
Much of health care is about prediction — predicting the outcome of a disease, predicting the results of a treatment for a disease, predicting the risk of getting a disease. When medicine, through neuroscience, genetics, or other methods, makes an accurate prediction that leads to a useful intervention, the prediction is clearly valuable. But predictions also can cause problems when they are inaccurate (or are perceived inaccurately by patients). Even if the predictions are accurate, they still have uncertain value if no useful interventions are possible. These problems may justify regulation of predictive neuroscientific medical testing.
Some predictive tests are inaccurate, either because the scientific understanding behind them is wrong or because the test is poorly performed. In other cases the test may be accurate in the sense that it gives an accurate assessment of the probability of a certain result, but any individual patient may not have the most likely outcome. In addition, patients or others may misinterpret the test results. In genetic testing, for example, a woman who tests positive for a BRCA 1 mutation may believe that a fatal breast cancer is inevitable, when, in fact, her lifetime risk of breast cancer is between 50 and 85 percent and her chance of dying from a breast cancer is roughly one-third of the risk of diagnosis. Alternatively, a woman who tests negative for the mutation may falsely believe that she has no risk for breast cancer and could stop breast self-examinations or mammograms to her harm. Even very accurate tests may not be very useful. Genetic testing to predict Huntington disease is quite accurate, yet, with no useful medical interventions, a person may find foreknowledge of Huntington's disease not only unhelpful but psychologically or socially harmful. These concerns have led to widespread calls for regulation of genetic testing.3
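The gap between the risk of diagnosis and the risk of death in that BRCA example can be made concrete with a little arithmetic (a sketch only; the one-third ratio is the rough figure quoted in the text):

```python
# Rough arithmetic on the BRCA 1 risk figures discussed above.
# Lifetime breast cancer diagnosis risk for mutation carriers is
# quoted as 50-85 percent; the chance of dying from the cancer is
# described as roughly one-third of the diagnosis risk.
low_diagnosis, high_diagnosis = 0.50, 0.85
death_ratio = 1 / 3  # rough ratio from the text

low_death = low_diagnosis * death_ratio    # about 17 percent
high_death = high_diagnosis * death_ratio  # about 28 percent
print(f"lifetime death risk: roughly {low_death:.0%} to {high_death:.0%}")
```

The point of the arithmetic is the one the author makes: a positive test does not make a fatal outcome inevitable, and misreading a 50–85 percent diagnosis risk as certain death is exactly the kind of misinterpretation that motivates calls for regulation and counseling.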
The same issues can easily arise through neuroscience. Neuroimaging, for example, might easily lead to predictions, with greater or lesser accuracy, of a variety of neurodegenerative diseases. Such imaging tests may be inaccurate, may present information patients find difficult to evaluate, and may provide information of dubious value and some harm. One might want to regulate some such tests along the lines proposed for genetic tests: proof that the test was effective at predicting the condition in question, assessment of the competency of those performing the tests, required informed consent so that patients appreciate the test's possible consequences, and assurance of post-test counseling to assure that patients understand the results.
The Food and Drug Administration (FDA) has statutory jurisdiction over the use of drugs, biologicals, or medical devices. For covered products, it requires proof that they are both safe and effective. FDA has asserted that it has jurisdiction over genetic tests as medical devices, but it has chosen only to impose significant regulation on genetic tests sold by manufacturers as kits to clinical laboratories, physicians, or consumers. Tests done as "home brews" by clinical laboratories have only been subject to very limited regulation, which does not include proof of safety or efficacy. Neuroscience tests might well be subject to even less FDA regulation. If the test used an existing, approved medical device, such as an MRI machine, no FDA approval of this additional use would be necessary. The test would be part of the "practice of medicine," expressly not regulated by the FDA.
The FDA also implements the Clinical Laboratory Improvement Amendments Act (CLIA), along with the Centers for Disease Control and Prevention and the Centers for Medicare and Medicaid Services. CLIA sets standards for the training and working conditions of clinical laboratory personnel and requires periodic testing of laboratories' proficiency at different tests. Unless the tests were done in a clinical laboratory, through, for example, pathological examination of brain tissue samples or analysis of chemicals from the brain, neuroscience testing would also seem to avoid regulation under CLIA.
At present, neuroscience-based testing, particularly through neuroimaging using existing (approved) devices seems to be entirely unregulated except, to a very limited extent, by malpractice law. One important policy question should be whether to regulate such tests, through government action or by professional self-regulation.
The criminal justice system makes predictions about individuals' future behavior in sentencing, parole, and other decisions, such as civil commitment for sex offenders.4 The trend in recent years has been to limit the discretion of judges and parole boards to use predictions by setting stronger sentencing guidelines or mandatory sentences. Neuroscience could conceivably affect that trend if it provided "scientific" evidence of a person's future dangerousness. Such evidence might be used to increase sentencing discretion - or it might provide yet another way to limit such discretion.5
One can imagine neuroscience tests that show a convicted defendant was particularly likely to commit dangerous future crimes by showing that he has, for example, poor control over his anger, his aggressiveness, or his sexual urges. This kind of evidence has been used in the past; neuroscience may come up with ways that either are more accurate or that appear more accurate (or more impressive). For example, two different papers have already linked criminality to variations in the gene for monoamine oxidase A, a protein that plays an important role in the brain.6 Genetic tests may seem more scientific and more impressive to a judge, jury, or parole board than a psychologist's report. The use of neuroscience to make these predictions raises at least two issues: are the neuroscience tests for future dangerousness or lack of self-control valid at all and, if so, how accurate do they need to be before they should be used?
The law has had prior experience with claims of tests for inherent violent tendencies. The XYY syndrome was widely discussed, and accepted in the literature though not by the courts,7 in the late 1960s and early 1970s. Men born with an additional copy of the Y chromosome were said to be much more likely to become violent criminals. Further research revealed, about a decade later, that XYY men were somewhat more likely to have low intelligence and to have long arrest records, typically for petty or property offenses. They did not have any higher than average predisposition to violence.
If, unlike XYY syndrome, a tested condition were shown reliably to predict future dangerousness or lack of control, the question would then become how accurate the test must be in order for it to be used. A test of dangerousness or lack of control that was only slightly better than flipping coins should not be given much weight; a perfect test could be. At what accuracy level should the line be set?
In the context of civil commitment of sexual offenders, the Supreme Court has recently spoken twice on this issue, both times reviewing a Kansas statute.8 The Kansas act authorizes civil commitment of a "sexually violent predator," defined as "any person who has been convicted of or charged with a sexually violent offense and who suffers from a mental abnormality or personality disorder which makes the person likely to engage in repeat acts of sexual violence."9 In Kansas v. Hendricks, the Court held the Act constitutional against a substantive due process claim because it required, in addition to proof of dangerousness, proof of the defendant's lack of control. "This admitted lack of volitional control, coupled with a prediction of future dangerousness, adequately distinguishes Hendricks from other dangerous persons who are perhaps more properly dealt with exclusively through criminal proceedings."10 It held Hendricks's commitment survived attack on ex post facto and double jeopardy grounds because the commitment procedure was neither criminal nor punitive.11
Five years later, the Court revisited this statute in Kansas v. Crane.12
It held that the Kansas statute could only be applied constitutionally if there were a determination of the defendant's lack of control and not just proof of the existence of a relevant "mental abnormality or personality disorder":
It is enough to say that there must be proof of serious difficulty in controlling behavior. And this, when viewed in light of such features of the case as the nature of the psychiatric diagnosis, and the severity of the mental abnormality itself, must be sufficient to distinguish the dangerous sexual offender whose serious mental illness, abnormality, or disorder subjects him to civil commitment from the dangerous but typical recidivist convicted in an ordinary criminal case.13
We know then that, at least in civil commitment cases related to prior sexually violent criminal offenses, proof that the particular defendant had limited power to control his actions is constitutionally necessary. There is no requirement that this evidence, or proof adduced in sentencing or parole hearings, convince the trier of fact beyond a reasonable doubt. The Court gives no indication of how strong that evidence must be or how its scientific basis would be established. Would any evidence that passed Daubert or Frye hearings be sufficient for civil commitment (or for enhancing sentencing or denying parole) or would some higher standard be required?
It is also interesting to speculate on how evidence of the accuracy of such tests would be collected. It is unlikely that a state or federal criminal justice system would allow a randomized double-blind trial, performing the neuroscientific dangerousness or volition tests on all convicted defendants at the time of their conviction and then releasing them to see which ones would commit future crimes. That judges, parole boards, or legislatures would insist on rigorous scientific proof of connections between neuroscience evidence and future mental states seems doubtful.
Schools commonly use predictions of individual cognitive abilities. Undergraduate and graduate admissions are powerfully influenced by applicants' scores on an alphabet's worth of tests: ACT, SAT, LSAT, MCAT, and GRE among others. Even those tests, such as the MCAT, that claim to test knowledge rather than aptitude use the applicant's tested knowledge as a predictor of her ability to function well in school, either because she has that background knowledge or because her acquisition of the knowledge demonstrates her abilities. American primary and secondary education uses aptitude tests less frequently, although some tracking does go on. And almost all of those schools use grading (after a certain level), which can be used to make predictions within the school or by others, such as other schools, employers, and parents.
It is conceivable that neuroscience could provide other methods of testing ability or aptitude. Of course, the standard questions of the accuracy of those tests would apply. Tests that are highly inaccurate usually should not be used. But even assuming the tests are accurate, they would raise concerns. Those tests might be used only positively, as Dr. Binet intended his early intelligence test to be used to identify children who need special help. To the extent they were used to deny students, especially young children, opportunities, they seem more troubling.
It is not clear why a society that uses aptitude tests so commonly for admission into elite schools should worry about their neuroscience equivalents. The SAT and other similar aptitude tests claim that student preparation or effort will not substantially affect student results; presumably, preparation (at least in the short term) would be similarly unlikely to alter the results of neuroscience tests of aptitude. The existing aptitude tests, though widely used, remain controversial. Neuroscience tests, particularly if given and acted upon at an early age, are likely to exacerbate the discomfort we already feel with predictive uses of aptitude tests in education.
Perhaps the most discussed social issue in human genetics has been the possible use — or abuse — of genetic data by businesses, particularly insurers and employers. Most, but not all, commentators have favored restrictions on the use of genetic information by health insurers and employers.14 And legislators have largely agreed. Over 45 states and, to some extent, the federal government restrict the use of genetic information in health insurance. Eleven states impose limits on the use of genetic information by life insurers, but those constraints are typically weak. About 30 states limit employer-ordered genetic testing or the use of genetic information in employment decisions, as does, to some very unclear extent, the federal government through the Americans with Disabilities Act.15 And 2004 may well mark the year when broad federal legislation against "genetic discrimination" is finally passed.16
Should similar legislation be passed to protect people against "neuroscience" discrimination?
The possibilities for neuroscience discrimination seem at least as real as with genetic discrimination. A predictive test showing that a person has a high likelihood of developing schizophrenia, bipolar disorder, early-onset Alzheimer disease, early-onset Parkinson disease, or Huntington disease could certainly provide insurers or employers with an incentive to avoid that person. To the extent one believes that health coverage should be universal or that employment should be denied or terminated only for good cause, banning "neuroscientific discrimination" might be justified as an incremental step toward this good end. Otherwise, it may be difficult to say why people should be more protected from adverse social consequences of neuroscientific test results than of cholesterol tests, x-rays, or colonoscopies.
Special protection for genetic tests has been urged on the ground that genes are more fundamental, more deterministic, and less the result of personal actions or chance than other influences on health. Others have argued against such "genetic exceptionalism," denying special power to genes and contending that special legislation about genetics only reinforces in the public a false view of genetic determinism. Still others, including me, have argued that the public's particularly strong fear of genetic test results, even though exaggerated, justifies regulation in order to gain concrete benefits from reducing that fear. The same arguments could be played out with respect to predictive neuroscience tests. Although this is an open empirical question, it does seem likely that the public's perception of the fundamental or deterministic nature of genes does not exist with respect to neuroscience.
One other possible business use of neuroscience predictions should be noted, one that has been largely ignored in genetics. Neuroscience might be used in marketing. Firms might use neuroscience techniques on test subjects to enhance the appeal of their products or the effectiveness of their advertising. Individuals or focus groups could, in the future, be examined under fMRI. At least one firm, Brighthouse Institute for Thought Sciences, has embraced this technology, and, in a press release from 2002, "announced its intentions of revolutionizing the marketing industry."17
More alarmingly, if neuro-monitoring devices were perfected that could study a person's mental function without his knowledge, information to predict a consumer's preferences might be collected for marketing purposes. Privacy regulation seems appropriate for the undisclosed monitoring in the latter example. Regulating the former seems less likely, although it might prove attractive if such neuroscience-enhanced market research proved too effective an aid to selling.
The prenatal use of genetic tests to predict the future characteristics of fetuses, embryos, or as-yet unconceived offspring is one of the most controversial and interesting issues in human genetics. Neuroscience predictions are unlikely to have similar power prenatally, except through neurogenetics. It is possible that neuroimaging or other non-genetic neuroscience tests might be performed on a fetus during pregnancy. Structural MRI has been used as early as about 24 weeks to look for major brain malformations, following up on earlier suspicious sonograms. At this point, no one appears to have done fMRI on the brain of a fetus; the classic method of stimulating the subject and watching which brain regions react would be challenging in utero, though not necessarily impossible. In any event, fetal neuroimaging seems likely to give meaningful results only for serious brain problems and even then at a fairly late stage of fetal development, so that the most plausible intervention, abortion, would be rarely used and only in the most extreme cases.18
Parents, however, like schools, might make use of predictive neuroscience tests during childhood to help plan, guide, or control their children's lives. Of course, parents already try to guide their children's lives, based on everything from good data to wishful thinking about a child's abilities. Would neuroscience change anything? It might be argued that parents would take neuroscience testing more seriously than other evidence of a child's abilities because of its scientific nature, and thus perhaps exaggerate its accuracy. More fundamentally, it could be argued that, even if the test predictions were powerfully accurate, too extreme parental control over a child's life is a bad thing. From this perspective, any procedures that are likely to add strength to parents' desire or ability to exercise that control should be discouraged. On the other hand, society vests parents with enormous control over their children's upbringing, intervening only in strong cases of abuse. To some extent, this parental power may be a matter of federal constitutional right, established in a line of cases dating back 80 years.19
This issue is perhaps too difficult to be tackled. It is worth noting, though, that government regulation is not the only way to approach it. Professional self-regulation, insurance coverage policies, and parental education might all be methods to discourage any perceived overuse of children's neuroscience tests by their parents.
II. LITIGATION USES
Predictions may themselves be relevant in some litigation, particularly the criminal cases discussed above, but other, non-predictive uses of neurosciences might also become central to litigated cases. Neuroscience might be able to provide relevant, and possibly determinative, evidence of a witness's mental state at the time of testimony, ways of eliciting or evaluating a witness's memories, or other evidence relevant to a litigant's claims. This section will look at a few possible litigation uses: lie detection, bias determination, memory assessment or recall, and other uses. Whether any of these uses is scientifically possible remains to be seen. It is also worth noting that the extent of the use of any of these methods will also depend on their cost and intrusiveness. A method of, for example, truth determination that required an intravenous infusion or examination inside a full-scale MRI machine would be used much less than a simple and portable headset.
The implications of any of these technologies for litigation seem to depend largely on four evidentiary issues. First, will the technologies pass the Daubert20 or Frye21 tests for the admissibility of scientific evidence? (I leave questions of Daubert and Frye entirely to Professor Morse.) Second, if they are held sufficiently scientifically reliable to pass Daubert or Frye, are there other reasons to forbid or to compel the admissibility of the results of such technologies when used voluntarily by a witness? Third, would the refusal — or the agreement — of a witness to use one of these technologies itself be admissible in evidence? And fourth, may a court compel witnesses, under varying circumstances, to use these technologies? The answers to these questions will vary with the setting (especially criminal or civil), with the technology, and with other circumstances of the case, but they provide a useful framework for analysis.
Detecting Lies or Compelling Truth
The concept behind current polygraph machines dates back to the early 20th century.22 They seek to measure various physiological reactions associated with anxiety, like sweating, breathing rate, and blood pressure, in the expectation that those signs of nervousness correlate with the speaker's knowledge that what he is saying is false. American courts have generally, but not universally, rejected them, although they are commonly used by the federal government for various security clearances and investigations.23 It has been estimated that their accuracy is about 85 to 90 percent.24
Now imagine that neuroscience leads to new ways to determine whether or not a witness is telling a lie or even to compel a witness to tell the truth. A brain imaging device might, for example, be able to detect patterns or locations of brain activity known from experiments to be highly correlated with the subject's consciousness of falsehood. (I will refer to this as "lie detection.") Alternatively, drugs or other stimuli might be administered that made it impossible for a witness to do anything but tell the truth — an effective truth serum. (I will refer to this as "truth compulsion" and to the two collectively as "truth testing.") Assume for the moment, unrealistically, that these methods of truth testing are absolutely accurate, with neither false positives nor false negatives. How would, and should, courts treat the results of such truth testing? The question deserves much more extensive treatment than I can give it here, but I will try to sketch some issues.
Consider first the non-scientific issues of admissibility. One argument against admissibility was made by four justices of the Supreme Court in United States v. Scheffer25, a case involving a blanket ban on the admissibility of polygraph evidence. Scheffer, an enlisted man in the Air Force working with military police as an informant in drug investigations, wanted to introduce the results of a polygraph examination at his court-martial for illegal drug use.26 The polygraph examination, performed by the military as a routine part of his work as an informant, showed that he denied illegal drug use during the same period that a urine test detected the presence of methamphetamine.27 Military Rule of Evidence 707, promulgated by President George H.W. Bush in 1991, provides that "Notwithstanding any other provision of law, the results of a polygraph examination, the opinion of a polygraph examiner, or any reference to an offer to take, failure to take, or taking of a polygraph examination, shall not be admitted into evidence."
The court-martial refused to admit Scheffer's evidence on the basis of Rule 707. His conviction was overturned by the Court of Appeals for the Armed Forces, which held that this per se exclusion of all polygraph evidence violated the Sixth Amendment.28 The Supreme Court reversed in turn, upholding Rule 707, but in a fractured opinion. Justice Thomas wrote the opinion announcing the decision of the Court and finding the rule constitutional on three grounds: continued question about the reliability of polygraph evidence, the need to "preserve the jury's core function of making credibility determinations in criminal trials," and the avoidance of collateral litigation.29 Chief Justice Rehnquist and Justices Scalia and Souter joined the Thomas opinion in full. Justice Kennedy, joined by Justices O'Connor, Ginsburg, and Breyer, concurred in the section of the Thomas opinion based on reliability of polygraph evidence. Those four justices did not agree with the other two grounds.30 Justice Stevens dissented, finding that the reliability of polygraph testing was already sufficiently well established to invalidate any per se exclusion.31
Our hypothesized perfect truth testing methods would not run afoul of the reliability issue. Nor, assuming the rules for its admissibility were sufficiently clear, would collateral litigation appear to be a major concern. It would seem, however, even more than the polygraph, to evoke the concerns of four justices about invading the sphere of the jury even when the witness had agreed to the use. Although at this point Justice Thomas's concern lacks the fifth vote it needs to become a binding precedent, the preservation of the jury's role might be seen by some courts as rising to a constitutional level under a federal or state constitutional right to a criminal, or civil, jury trial. It could certainly be used as a policy argument against allowing such evidence and, as an underlying concern of the judiciary, it might influence judicial findings under Daubert or Frye about the reliability of the methods.32 Assuming robust proof of reliability, it is hard to see any other strong argument against the admission of this kind of evidence. (Whether Justice Thomas's rationale, either as a constitutional or a policy matter, would apply to non-jury trials seems more doubtful.)
On the other hand, some defendants might have strong arguments for the admission of such evidence, at least in criminal cases. Courts have found in the Sixth Amendment, perhaps in combination with the Fifth Amendment, a constitutional right for criminal defendants to present evidence in their own defense. Scheffer made this very claim, that Rule 707, in the context of his case, violated his constitutional right to present a defense. The Supreme Court has two lines of cases dealing with this right. In Chambers v. Mississippi, the Court resolved the defendant's claim by balancing the importance of the evidence to the defendant's case with the reliability of the evidence.33 In Rock v. Arkansas, a criminal defendant alleged that she could remember the events only after having her memory "hypnotically refreshed."34 The Court struck down Arkansas's per se rule against hypnotically refreshed testimony on the ground that the rule, as a per se rule, was arbitrary and therefore violated the Sixth Amendment's rights to present a defense and to testify in her own defense. The Rock opinion also stressed that the Arkansas rule prevented the defendant from telling her own story in any meaningful way. That might argue in favor of the admissibility of a criminal defendant's own testimony, under truth compulsion, as opposed to an examiner giving his expert opinion about the truthfulness of the witness's statements based on the truth detector results. These constitutional arguments for the admission of such evidence would not seem to arise with the prosecution's case or with either the plaintiff's or defendant's case in a civil matter (unless some state constitutional provisions were relevant).35
Assuming "truth tested" testimony were admissible, should either a party's, or a witness's, offer or refusal to undergo truth testing be admissible in evidence as relevant to their honesty? Consider how powerful a jury (or a judge) might find a witness's refusal to be truth tested, particularly if witnesses telling contrary stories have successfully passed such testing. Such a refusal could well prove fatal to the witness's credibility.
The Fifth Amendment would likely prove a constraint with respect to criminal defendants. The fact that a defendant has invoked the Fifth Amendment's privilege against self-incrimination cannot normally be admitted into evidence or considered by the trier of fact. Otherwise, the courts have held, the defendant would be penalized for having invoked the privilege. A defendant who takes the stand might well be held to have waived that right and so might be impeached by his refusal to undergo truth testing. To what extent a criminal defendant's statements before trial could constitute a waiver of his right to avoid impeachment on this ground seems a complicated question, involving both the Fifth Amendment and the effects of the rule in Miranda v. Arizona.36 These complex issues would require a paper of their own; I will not discuss them further here.
Apart from a defendant in a criminal trial, it would seem that any other witnesses should be impeachable for their refusal to be truth tested; they might invoke the privilege against self-incrimination but the trier of fact, in weighing their credibility in this trial, would not be using that information against them. And this should be true for prosecution witnesses as well as defense witnesses. Both parties and non-party witnesses at civil trials would seem generally to be impeachable for their refusal to be truth-tested, except in some jurisdictions that hold that a civil party's invocation of the Fifth Amendment may not be commented upon even in a civil trial.
It seems unlikely that a witness's willingness to undergo truth testing would add anything to the results of a test in most cases. It might, however, be relevant, and presumably admissible, if for some reason the test did not work on that witness or, unbeknownst to the witness at the time she made the offer, the test results turned out to be inadmissible.
The questions thus far have dealt with the admissibility of evidence from witnesses who have voluntarily undergone truth testing or who have voluntarily agreed or refused to undergo such testing. Could, or should, either side have the power to compel a witness to undergo either method of truth testing? At its simplest, this might be a right to re-test a witness tested by the other side, a claim that could be quite compelling if the results of these methods, like the results of polygraphy, were believed to be significantly affected by the means by which it was administered — not just the scientific process but the substance and style of the questioning. More broadly, could either side compel a witness, in a criminal or a civil case, to undergo such truth testing as part of either a courtroom examination or in pretrial discovery?
Witnesses certainly can be compelled to testify, at trial or in deposition. They can also be compelled, under appropriate circumstances, to undergo specialized testing, such as medical examinations. (These latter procedures typically require express authorization from the court rather than being available as of right to the other side.) Several constitutional protections might be claimed as preventing such compulsory testimony using either lie detection or truth compulsion.
A witness might argue that the method of truth testing involved was so great an intrusion into the person's bodily (or mental) integrity as to "shock the conscience" and violate the Fifth or Fourteenth Amendment, as did the stomach pumping in Rochin v. California.37 A test method involving something like the wearing of headphones might seem quite different from one involving an intravenous infusion of a drug or envelopment in the coffin-like confines of a full-sized MRI machine. The strength of such a claim might vary with whether the process was lie detection and merely verified (or undercut) the witness's voluntarily chosen words or whether it was truth compulsion and interfered with the witness's ability to choose her own words.
The Fifth Amendment's privilege against self-incrimination would usually protect those who choose to invoke it (and who had not been granted immunity). As noted above, that would not necessarily protect either a party in a civil case or a non-defendant witness in a criminal case from impeachment for invoking the privilege.
Would a witness have a possible Fourth Amendment claim that such testing, compelled by court order, was an unreasonable search and seizure by the government? I know of no precedent for considering questioning itself as a search or seizure, but this form of questioning could be seen as close to searching the confines of the witness's mind. In that case, would a search warrant or other court order suffice to authorize the test against a Fourth Amendment claim? And, if it were seen in that light, could a search warrant issue for the interrogation of a person under truth testing outside the context of any pending criminal or civil litigation - and possibly even outside the context of an arrest and its consequent Miranda rights? If this seems implausible, consider what an attractive addition statutory authorization of such "mental searches" might seem to the Administration or the Congress in the next version of the USA PATRIOT Act.38
In some circumstances, First Amendment claims might be plausible. Truth compulsion might be held to violate in some respects the right not to speak, although the precedents on this point are quite distant, involving a right not to be forced to say, or to publish, specific statements. It also seems conceivable that some religious groups could object to these practices and might be able to make a free exercise clause argument against such compelled speech.
These constitutional questions are many and knotty. Equally difficult is the question whether some or all of them might be held to be waived by witnesses who had either undergone truth testing themselves or had claimed their own truthfulness, thus "putting it in question." And, of course, even if parties or witnesses have no constitutional rights against being ordered to undergo truth testing, that does not resolve the policy issue of whether such rights should exist as a matter of statute, rule, or judicial decision.
Parties and witnesses are not the only relevant actors in trials. Truth testing might also be used in voir dire. Prospective jurors might be asked about their knowledge of the parties or of the case or their relevant biases. Could a defendant claim that his right to an unbiased juror was infringed if such methods were not used and hence compel prospective jurors to undergo truth testing? Could one side or the other challenge for cause a prospective juror who was unwilling to undergo such testing? In capital cases, jurors are asked whether they could vote to convict in light of a possible death penalty; truth testing might be demanded by the prosecution to make sure the prospective jurors are being honest.
It is also worth considering how the existence of such methods might change the pretrial maneuvers of the parties. Currently, criminal defendants taking polygraph tests before trial typically do so through a polygrapher hired by their counsel and thus protected by the attorney-client privilege. Whatever rules are adopted concerning the admissibility of evidence from truth testing will undoubtedly affect the incentives of the parties, in civil and criminal cases, to undergo truth testing. This may, in turn, have substantial, and perhaps unexpected, repercussions for the practices of criminal plea bargaining and civil settlement. As the vast majority of criminal and civil cases are resolved before trial, the effects of truth testing could be substantial.
Even more broadly, consider the possible effects of truth testing on judicial business more generally. Certainly not every case depends on the honesty of witness testimony. Some hinge on conclusions about reasonableness or negligence; others are determined by questions of law. Even factual questions might be the focus of subjectively honest, but nevertheless contradictory, testimony from different witnesses. Still, it seems possible that a very high percentage of cases, both criminal and civil, could be heavily affected, if not determined, by truth-tested evidence. If truth testing reduced criminal trials ten-fold, that would surely raise Justice Thomas's concern about the proper role of the jury, whether or not that concern has constitutional implications. It would also have major effects on the workload of the judiciary and, perhaps, on the structure of the courts.
The questions raised by a perfect method of truth testing are numerous and complicated. They are also probably unrealistic given that no test will be perfect. Most of these questions would require reconsideration if truth testing turned out to be only 99.9% accurate, or 99% accurate, or 90% accurate. That reconsideration would have to consider not just overall "accuracy" but the rates of both false positives (the identification of a false statement as true) and false negatives (the identification of a true statement as false), as those may have different implications. Similarly, decisions on admissibility might differ if accuracy rates varied with a witness's age, sex, training in "beating" the machine, or other traits. And, of course, proving the accuracy of such methods as they are first introduced or as they are altered will be a major issue in court systems under the Daubert or Frye tests.
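That reconsideration can be made concrete with a short sketch. The numbers are hypothetical and chosen only for illustration, following the definitions above (a false positive identifies a false statement as true; a false negative identifies a true statement as false): two tests with identical overall accuracy can distribute their errors very differently between lying and truthful witnesses.

```python
# Hypothetical illustration: overall "accuracy" conceals the split between
# false positives (lies labeled true) and false negatives (truths labeled false).
def error_profile(truth_rate, false_pos_rate, false_neg_rate):
    """Share of all witnesses wrongly cleared vs. wrongly branded as liars."""
    wrongly_cleared = (1 - truth_rate) * false_pos_rate  # lies that pass
    wrongly_branded = truth_rate * false_neg_rate        # truths that fail
    return wrongly_cleared, wrongly_branded

# Suppose 90% of witnesses testify truthfully. Two hypothetical tests:
balanced = error_profile(0.90, 0.10, 0.10)  # errors split both ways
lenient  = error_profile(0.90, 0.55, 0.05)  # rarely brands truth-tellers liars
# Each test is "90% accurate" overall (total error share 0.10), yet the
# second clears more than half of all lying witnesses.
print(balanced, lenient)
```

Which profile is tolerable plainly differs with the setting: a court might accept many wrongly cleared liars in a criminal prosecution but balk at the same test's use against a defendant, which is why overall accuracy alone cannot settle admissibility.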
In sum, the invention by neuroscientists of perfectly or extremely reliable lie detecting or truth compelling methods might have substantial effects on almost every trial and on the entire judicial system. How those effects would play out in light of our current criminal justice system, including the constitutional protections of the Bill of Rights, is not obvious.
Evidence produced by neuroscience may play other significant roles in the courtroom. Consider the possibility of testing, through neuroimaging, whether a witness or a juror reacts negatively to particular groups. Already, neuroimaging work is going on that looks for — and finds — differences in a subject's brain's reaction to people of different races. If that research is able to associate certain patterns of activity with negative bias, its possible use in litigation could be widespread.
As with truth testing, courts would have to decide whether bias testing met Daubert or Frye, whether voluntary test results would be admissible, whether a party's or witness's refusal or agreement to take the test could be admitted into evidence, and whether the testing could ever be compelled. The analysis on these points seems similar to that for truth testing, with the possible exception of a lesser role for the privilege against self-incrimination.
If allowed, neuroscience testing for racial bias might be used where bias was a relevant fact in the case, as in claims of employment discrimination based on race. It might be used to test any witness for bias for or against a party of a particular race. It might be used to test jurors to ensure that they were not biased against the parties because of their race. One could even, barely, imagine it being used to test judges for bias, perhaps as part of a motion to disqualify for bias. And, of course, such bias testing need not be limited to bias based on race, nationality, sex, or other protected groups. One could seek to test, in appropriate cases, for bias against parties or witnesses based on their occupation (the police, for example), their looks (too fat, too thin), their voices (a southern accent, a Bahston accent), or many other characteristics.
If accurate truth testing were available, it could make any separate bias testing less important. Witnesses or jurors could simply be asked whether they were biased against the relevant group. On the other hand, it is possible that people might be able to answer honestly that they were not biased, when they were in fact biased. Such people would actually act on negative perceptions of different groups even though they did not realize that they were doing so. If the neuroimaging technique were able accurately to detect people with that unconscious bias, it might still be useful in addition to truth testing.
Bias testing might even force us to re-evaluate some truisms. We say that the parties to litigation are entitled to unbiased judges and juries, but we mean that they are entitled to judges and juries that are not demonstrably biased in a context where demonstrating bias is difficult. What if demonstrating bias becomes easy — and bias is ubiquitous? Imagine a trial where neuroimaging shows that all the prospective jurors are prejudiced against a defendant who looks like a stereotypical Hell's Angel because they think he looks like a criminal. Or what if the only potential jurors who didn't show bias were themselves members of quasi-criminal motorcycle gangs? What would his right to a fair trial mean in that context?
Evaluating or Eliciting Memory
The two methods discussed so far involve analyzing (or in the case of truth compulsion, creating) a present state of mind. It is conceivable that neuroscience might also provide courts with at least three relevant tools concerning memory. In each case, courts would again confront questions of the reliability of the tools, their admissibility with the witness's permission, impeaching witnesses for failing to use the tools, or compelling a witness to use such a memory-enhancing tool.
The first tool might be an intervention, pharmacological or otherwise, that improved a witness's ability to remember events. It is certainly conceivable that researchers studying memory-linked diseases might create drugs that help people retrieve old memories or retrieve them in more detail. This kind of intervention would not be new in litigation. In recent years, the courts have seen great controversy over "repressed" or "recovered" memories, typically traumatic early childhood experiences brought back to adult witnesses by therapy or hypnosis. Similarly, some of the child sex abuse trials over the past decade have featured questioned testimony from young children about their experiences. In both cases, the validity of these memories has been questioned. We do know from research that people often will come to remember, in good faith, things that did not happen, particularly when those memories have been suggested to them.39 Similar problems might arise with "enhanced" memories.40
A second tool might be the power to assess the validity of a witness's memory. What if neuroscience could give us tools to distinguish between "true" and "false" memory? One could imagine different parts of a witness's brain being used while recounting a "true" memory, a "false" memory, or a creative fiction. Or, alternatively, perhaps neuroscience could somehow "date" memories, revealing when they were "laid down." These methods seem more speculative than either truth testing or bias testing, but, if either one (or some other method of testing memory) turned out to be feasible, courts would, after the Daubert or Frye hearings, again face questions of admitting testimony concerning their voluntary use, allowing comment on a witness's refusal to take the test, and possibly compelling their use.
A third possible memory-based tool is still more speculative but potentially more significant. There have long been reports that electrical stimulation can, sometimes, trigger a subject to have what appears to be an extremely detailed and vivid memory of a past scene, almost like reliving the experience. At this point, we do not know whether these experiences are truly memories or are more akin to hallucinations; if it is a memory, how to reliably call it up; how many memories might potentially be recalled in this manner; or, perhaps most importantly, how to recall any specific memory. Whatever filing system the brain uses for memories seems to be, at this point, a mystery. Assume that it proves possible to cause a witness to recall a specific memory in its entirety, perhaps by localizing the site of the memory first through neuroimaging the witness while she calls up her own existing memories of the event. A witness could then, perhaps, relive an event important to trial, either before trial or on the witness stand. One could even, just barely, imagine a technology that might be able to "read out" the witness's memories, intercepted as neuronal firings, and translate them directly into voice, text, or the equivalent of a movie for review by the finder of fact. Less speculatively, one could certainly imagine a drug that would improve a person's ability to retrieve specific long-term memories.
While a person's authentic memories, no matter how vividly they are recalled, may not be an accurate portrayal of what actually took place, they would be more compelling testimony than that provided by typically foggy recollections of past events. Once again, if the validity of these methods were established, the key questions would seem to be whether to allow the admission of evidence from such a recall experience, voluntarily undertaken; whether to admit the fact of a party's or witness's refusal or agreement to use such a method; and whether, under any circumstances, to compel the use of such a technique.41
Other Litigation-Related Uses
Neuroscience covers a wide range of brain-related activities. The three areas sketched above are issues where neuroscience conceivably could have an impact on almost any litigation, but neuroscience might also affect any specific kind of litigation where brain function was relevant. Consider four examples.
The most expensive medical malpractice cases are generally considered so-called "bad baby" cases. In these cases, children are born with profound brain damage. Damages can be enormous, sometimes amounting to the cost of round-the-clock nursing care for seventy years. Evidence of causation, however, is often very unclear. The plaintiff parents will allege that the defendants managed the delivery negligently, which led to a lack of oxygen that in turn caused the brain damage. Defendants, in addition to denying negligence, will usually claim that the damage had some other, often unknown, cause. Jurors are left with a family facing a catastrophic situation and no strong evidence about what caused it. Trial verdicts, and settlements, can be extremely high, accounting in part for the high price of malpractice insurance for obstetricians. If neuroscience could reliably distinguish between brain damage caused by oxygen deprivation near birth and that caused earlier, these cases would have more accurate results, in terms of compensating only families where the damage was caused around delivery. Similarly, if fetal neuroimaging could reveal serious brain damage before labor, those images could be evidence about the cause of the damage. (One can even imagine obstetricians insisting on prenatal brain scans before delivery in order to establish a baseline.) By making the determination of causation more certain, it should also lead to more settlements and less wasteful litigation. (Of course, in cases where neuroscience showed that the damage was consistent with lack of oxygen around delivery, the defendants' negligence would still be in question.)
In many personal injury cases, the existence of intractable pain may be an issue. In some of those cases there may be a question whether the plaintiff is exaggerating the extent of the pain. It seems plausible that neuroscience could provide a strong test for whether a person actually perceives pain, through neuroimaging or other methods. It might be able to show whether signals were being sent by the sensory nerves to the brain from the painful location on the plaintiff's body. Alternatively, it might locate a region of the brain that is always activated when a person feels pain or a pattern of brain activation that is always found during physically painful experiences. Again, by reducing uncertainty about a very subjective (and hence falsifiable) aspect of a case, neuroscience could improve the litigation system.
A person's competency is relevant in several legal settings, including disputed guardianships and competency to stand trial. Neuroscience might be able to establish some more objective measures that could be considered relevant to competency. (It might also reveal that what the law seems pleased to regard as a general, undifferentiated competency does not, in fact, exist.) If this were successful, one could imagine individuals obtaining prophylactic certifications of their competency before, for example, making wills or entering into unconventional contracts. The degree of mental ability is also relevant in capital punishment, where the Supreme Court has recently held that executing the mentally retarded violates the Eighth Amendment.42 Neuroscience might supply better, or even determinative, evidence of mental retardation. Or, again, it may be that neuroscience would force the courts to recognize that "mental retardation" is not a discrete condition.
Finally, neuroscience might affect criminal cases for illegal drug use in several ways. Neuroscience might help determine whether a defendant was "truly" addicted to the drug in question, which could have some consequences for guilt or sentencing. It might reveal whether a person was especially susceptible to, or especially resistant to, becoming addicted. Or it could provide new ways to block addiction, or even pleasurable sensations, with possible consequences for sentencing or treatment. Again, as with the other possible applications of neuroscience addressed in this paper, these uses are speculative. It would be wrong to count on neuroscience to solve, deus ex machina, our drug problems. It does not seem irresponsible, however, to consider the possible implications of neuroscience breakthroughs in this area.43
III. CONFIDENTIALITY AND PRIVACY
I am using these two often conflated terms to mean different things. I am using "confidentiality" to refer to the obligation of a professional or an entity to limit appropriately the availability of information about people (in this context, usually patients or research subjects). "Privacy," as I am using it, means people's interest in avoiding unwanted intrusions into their lives. The first focuses on limiting the distribution of information appropriately gathered; the second concerns avoiding intrusions, including the inappropriate gathering of information. Neuroscience will raise challenges concerning both concepts.
Maintaining — and Breaking — Confidentiality
Neuroscience may lead to the generation of sensitive information about individual patients or research subjects, information whose distribution they may wish to see restricted. Personal health information is everywhere protected in the United States, by varying theories under state law, by new federal privacy regulations under the Health Insurance Portability and Accountability Act (HIPAA),44 and by codes of professional ethics. Personal information about research subjects must also be appropriately protected under the Common Rule, the federal regulation governing most (but not all) biomedical research in the United States.45 The special issue with neuroscience-derived information is whether some or all of it requires additional protection.
Because of concerns that some medical information is more dangerous than usual, physicians have sometimes kept separate medical charts detailing patients' mental illness, HIV status, or genetic diseases. Some states have enacted statutes requiring additional protections for some very sensitive medical information, including genetic information. Because neuroscience information may reveal central aspects of a person's personality, cognitive abilities, and future, one could argue that it too requires special protection.
Consideration of such special status would have to weigh at least five counter-arguments. First, any additional recordkeeping or data protection requirements both increase costs and risk making important information unavailable to physicians or patients who need it. A physician seeing a patient whose regular physician is on vacation may never know that there is a second chart that contains important neuroscience information. Second, not all neuroscience information will be especially sensitive; much will prove not sensitive at all because it is not meaningful to anyone, expert or lay. Third, defining "neuroscience information" will prove difficult. Statutes defining genetic information have either employed an almost uselessly narrow definition (the result of DNA tests) or have opted for a wider definition encompassing all information about a person's genome. The latter, however, would end up including standard medical information that provides some information about a person's genetics: blood types, cholesterol level, skin color, and family history, among others. Fourth, mandating special protection for a class of information sends the message that the information is especially important even if it is not. In genetics, it is argued that legislation based on such "genetic exceptionalism" increases a false and harmful public sense of "genetic determinism." Similar arguments might apply to neuroscience. Finally, given the many legitimate and often unpredictable needs for access to medical information, confidentiality provisions will often prove ineffective at keeping neuroscience information private, especially from the health insurers and employers who are paying for the medical care. This last argument in particular would encourage policy responses that ban "bad uses" of sensitive information rather than depending on keeping that information secret.
Laws and policies on confidentiality also need to consider the limits on confidentiality. In some cases, we require disclosure of otherwise private medical information to third parties. Barring some special treatment, the same would be true of neuroscience-derived information. A physician (including, perhaps, a physician-researcher) may have an obligation to report to a county health agency or the Centers for Disease Control neuroscience-derived information about a patient that is linked to a reportable disease (an MRI scan showing, for example, a case of new variant Creutzfeldt-Jakob disease, the human version of "mad cow disease"); to a motor vehicle department information linked to loss-of-consciousness disorders; and to a variety of governmental bodies information leading to a suspicion of child abuse, elder abuse, pesticide poisoning, or other topics as specified by statute. In some cases, it might be argued, as it has been in genetics, that a physician has a responsibility to disclose a patient's condition to a family member if the family member is at higher risk of the same condition as a result. Finally, neuroscience information showing an imminent and serious threat from a patient to a third party might have to be reported under the Tarasoff doctrine.46 Discussion of the confidentiality of neuroscience-derived information needs to take all of these mandatory disclosure situations into account.
Privacy Protections Against Mental Intrusions
Privacy issues, as I am using the term in this paper, would arise as a result of neuroscience through unconsented and inappropriate intrusions into a person's life. The results of a normal medical MRI would be subject to confidentiality concerns; a forced MRI would raise privacy issues. Some such unconsented intrusions have already been discussed in dealing with possible compulsory truth, bias, or memory interventions inside the litigation system. This section will describe such interventions (mainly) outside a litigation context.
Intrusions by the government are subject to the Constitution and its protections of privacy, contained in and emanating from the penumbra of the Bill of Rights. Whether or not interventions were permitted in the courtroom, under judicial supervision, the government might use them in other contexts, just as polygraphs are used in security clearance investigations. All of these non-litigation governmental uses share a greater possibility of abuse than the use of such a technology in a court-supervised setting.
Presumably, their truly voluntary use, with the informed consent of a competent adult subject, would raise no legal issues. Situations where agreement to take the test could be viewed as less than wholly voluntary would raise their own set of sticky problems about the degree of coercion. Consider the possibility of truth tests for those seeking government jobs, benefits, or licenses. Admission to a state college (or eligibility for government-provided scholarships or government-guaranteed loans) might, for example, be conditioned on passing a lie detection examination on illegal drug use.
Frankly compelled uses might also occur, although they would raise constitutional questions under the Fourth and Fifth Amendments. One could imagine law enforcement officials deciding to interrogate one member of a criminal gang under truth compulsion in violation of Miranda and of the Fifth Amendment (and hence to forego bringing him to trial) in order to get information about his colleagues. Even if a person had been given a sufficiently broad grant of immunity to avoid any Fifth Amendment issues, would that really protect the interests of a person forced to undergo a truth compulsion process? Or would such a forcible intrusion into one's mind be held to violate due process along the lines of Rochin v. California?47
Of course, even if the interrogated party could bring a constitutional tort claim against the police, how often would such a claim be brought? And would we — or our courts — always find such interrogations improper? Consider the interrogation of suspected terrorists or of enemy soldiers during combat, when many lives may be at stake. (This also raises the interesting question of how the U.S. could protect its soldiers or agents from similar questioning).
Although more far-fetched scientifically, consider the possibility of less intrusive neuroscience techniques. What if the government developed a neuroimaging device that could be used at a distance from a moving subject or one that could fit into the arch of an airport metal detector? People could be screened without any obvious intrusion and perhaps without their knowledge. Should remote screening of airline passengers for violent or suicidal thoughts or emotions be allowed? Would it matter whether the airport had signs saying that all travelers, by their presence, consented to such screening?
Private parties have less ability than the government to compel someone to undergo a neuroscience intervention - at least without being liable to arrest for assault. Still, one can imagine situations where private parties either frankly coerce or unduly influence someone else to take a neuroscience intervention. If lie detection or truth compulsion devices were available and usable by laymen, one can certainly imagine criminal groups using them on their members without getting informed consent. Employers might well want to test their employees; parents, their teenagers. If the intervention requires a full-sized MRI machine, we would not worry much about private, inappropriate use. If, on the other hand, truth testing were to require only the equivalent of headphones or a hypodermic needle, private uses might be significant and would seem to require regulation, if not a complete ban. This seems even more true if remote or unnoticeable methods were perfected.
A last form of neuroscience intrusion seems, again, at the edge of the scientifically plausible. Imagine an intervention that allowed an outsider to control the actions or motions, and possibly even the speech, emotions, or thoughts, of a person. Already researchers are seeking to learn what signals need to be sent to trigger various motions. Dr. Miguel Nicolelis of Duke University has been working to determine what neural activity triggers particular motions in rats and in monkeys and he hopes to be able to stimulate it artificially.48 One goal is to trigger the implanted electrodes and have the monkey's arm move in a predictable and controlled fashion. The potential benefits of this research are enormous, particularly to people with spinal cord injuries or other interruptions in their motor neurons. On the other hand, it opens the nightmarish possibility of someone else controlling one's body — a real version of the Imperius Curse from Harry Potter's world.
Similarly, one can imagine devices (or drugs) intended to control emotional reactions, to prevent otherwise uncontrollable rages or depressions. One could imagine a court ordering implantation of such a device in sexual offenders to prevent the emotions that give rise to their crimes or, perhaps more plausibly, offering such treatment as an option, in place of a long prison term. Castration, an old-fashioned method of accomplishing a similar result, either surgical or chemical, is already a possibility for convicted sex offenders in some states. Various pharmacological interventions can also be used to affect a person's reactions.
These kinds of interventions may never become more than the ravings of victims of paranoia, though it is at least interesting that the Defense Advanced Research Projects Agency (DARPA) is providing $26 million in support of Nicolelis's research through its "Brain-Machine Interfaces" program.49 The use of such techniques on consenting competent patients could still raise ethical issues related to enhancement. Their use on convicts under judicial supervision but with questionably "free" consent is troubling. Their possible use on unconsenting victims is terrifying. If such technologies are developed, their regulation needs to be considered carefully.
IV. PATENTS

Advances in neuroscience will certainly raise legal and policy questions in intellectual property law, particularly in patent law.50 Fortunately, few of those questions seem novel, as most seem likely to parallel issues already raised in genetics. In some important respects, however, the issues seem less likely to be charged than those encountered in genetics.
Two kinds of neuroscience patents seem likely. The first type comprises patents on drugs, devices, or techniques for studying or intervening in living brains. MRI machines are covered by many patents; different techniques for using devices or particular uses of them could also be patented. So, for example, the first person to use an MRI machine to search for a particular atom or molecule might be able to patent that use, unless it were an obvious extension of existing practice. Similarly, someone using an MRI machine, or a drug, for the purpose of determining whether the subject was telling the truth could patent that use of that machine or drug, even if she did not own a patent on the machine or drug itself.
The second type would be a patent on a particular pattern of activity in the brain. (I will refer to these as "neural pattern patents.") The claims could be that this pattern could be used to diagnose conditions, to predict future conditions, or as an opportunity for an intervention. This would parallel the common approach to patenting genes for diagnosis, for prediction, and for possible gene therapy. Neuroimaging results seem the obvious candidates for this kind of patent, although the patented pattern might show up, for example, as a set of gene expression results revealed by microarrays or gene chips.
I will discuss the likely issues these kinds of patents raise in three categories: standard bioscience patent issues, "owning thoughts," and medical treatments.
Standard Bioscience Patent Issues
Patents in the biological sciences, especially those relating to genetics, have raised a number of different concerns. Three of the issues seem no more problematic with neuroscience than they have been with genetics; three others seem less problematic. Whether this is troublesome, of course, depends largely on one's assessment of the current state of genetic patents. My own assessment is relatively sanguine; I believe we are muddling through the issues of genetic patents with research and treatment continuing to thrive. I am optimistic, therefore, that none of these standard patent issues will cause broad problems in neuroscience.
Two concerns are based on the fact of the patent monopoly. Some complain that patents allow the patent owner to restrict the use and increase the price of the patented invention, thus depriving some people of its benefits.51 This is, of course, true of all patents and is a core idea behind the patent system: the time-limited monopoly provides the economic returns that encourage inventors to invent. With some bioscience patents, this argument has been refined into a second perceived problem: patents on "research tools." Control over a tool essential to the future of a particular field could, some say, give the patent owner too much power over the field and could end up retarding research progress. This issue has been discussed widely, most notably in the 1998 Report of the National Institutes of Health (NIH) Working Group on Research Tools, which made extensive recommendations on the subject.52 Some neuroscience patents may raise concerns about monopolization of basic research tools, but it is not clear that those problems cannot be handled if and as they arise.
A third issue concerns the effects of patents on universities. Under the Bayh-Dole Act, passed in 1980, universities and other non-profit organizations where inventions were made using federal grant or contract funds can claim ownership of the resulting inventions, subject to certain conditions. Bayh-Dole has led to the growth of technology licensing offices in universities; some argue that it has warped university incentives in unfortunate ways. Neuroscience patents might expand the number of favored, money-making departments in universities, but seem unlikely to make a qualitative difference.
Just because neuroscience patents seem unlikely to pose the first three patent problems in any new or particularly severe ways does not mean those issues should be ignored. Individual neuroscience patents might cause substantial problems that call for intervention; the cumulative weight of neuroscience patents when added to other bioscience patents may make systemic reform of one kind or another more pressing. But the outlines of the problems are known.
Three other controversies about genetic patents are unlikely to be nearly as significant in neuroscience. They seem relevant, if at all, to neural pattern patents, not to device or process patents.
Two of the controversies grew out of patents on DNA sequences. In 1998 Rebecca Eisenberg and Michael Heller pointed out "the tragedy of the anti-commons," the concern that having too many different patents for DNA sequences under different ownership could increase transaction costs so greatly as to foreclose useful products or research.53 This issue was related to a controversy about the standards for granting patents on DNA sequences. Researchers were applying for tens of thousands of patents on small stretches of DNA without necessarily knowing what, if anything, the DNA did. Often these were "expressed sequence tags" or "ESTs," stretches of DNA that were known to be in genes and hence to play some role in the body's function because they were found in transcribed form as messenger RNA in cells. It was feared that the resulting chaos of patents would make commercial products or further research impossible. This concern eventually led the Patent and Trademark Office to issue revised guidelines tightening the utility requirement for gene patents.
However strong or weak these concerns may be in genetics, neither issue seems likely to be very important in neuroscience (except of course in neurogenetics). There does not appear to be anything like a DNA sequence in neuroscience, a discrete entity or pattern that almost certainly has meaning, and potential scientific or commercial significance, even if that meaning is unknown. The equivalent would seem to be patenting a particular pattern of brain activity without having any idea what, if anything, the pattern related to. That was plausible in genetics because the sequence could be used as a marker for the still unknown gene; nothing seems equivalent in neuroscience. Similarly, it seems unlikely that hundreds or thousands of different neural patterns, each patented by different entities, would need to be combined into one product or tool for commercial or research purposes.
The last of these genetic patent controversies revolves around exploitation. Some have argued that genetic patents have often stemmed from the alleged inventors' exploitation of individuals or indigenous peoples who provided access to or traditional knowledge about medicinal uses of living things, who had created and maintained various genetically varied strains of crops, or who had actually provided human DNA with which a valuable discovery was made. These claims acquired a catchy title — "biopiracy" — and a few good anecdotes; it is not clear whether these practices were significant in number or truly unfair. Neuroscience should face few if any such claims. The main patterns of the research will not involve seeking genetic variations from crops or other living things, nor does it seem likely (apart from neurogenetics) that searches for patterns found in unique individuals or distinct human populations will be common.
Owning Thoughts
Patents on human genes have been extremely controversial for a wide variety of reasons. Some have opposed them for religious reasons, others because they were thought not to involve true "inventions," others because they believed human genes should be "the common heritage of mankind," and still others because they believe such gene patents "commodify" humans. (Similar but slightly different arguments have raged over the patentability of other kinds of human biological materials or of non-human life-forms.) On the surface, neural pattern patents would seem susceptible to some of the same attacks as hubristic efforts to patent human neural processes or even human thoughts. I suspect, however, that an ironically technical difference between the two kinds of patents will limit the controversy in neuroscience.
Patents on human genes — or, more accurately, patents on DNA or RNA molecules of specified nucleotide sequences — are typically written to claim a wide range of conceivable uses of those sequences. A gene patent, for example, might claim the use of a sequence to predict, to diagnose, or to treat a disease. But it will also claim the molecule itself as a "composition of matter." The composition of matter claim gives the owner rights over any other uses of the sequence even though he has not foreseen them. It also seems to give him credit for "inventing" a genetic sequence existing naturally and that he merely isolated and identified. It is the composition of matter claims that have driven the controversy over gene patents. Few opponents claim that the researchers who, for example, discovered the gene linked to cystic fibrosis should not be able to patent beneficial uses of that gene, such as diagnosis or treatment. It is the assertion of ownership of the thing itself that rankles even though that claim may add little value to the other "use" claims.
Neural pattern patents would differ from gene patents in that there is no composition of matter to be patented. The claim would be to certain patterns used for certain purposes. The pattern itself is not material — it is not a structure or a molecule — and so should not be claimable as a "composition of matter." Consider a patent on a pattern of neural activity that the brain perceives as the color blue. A researcher might patent the use of the pattern to tell if someone was seeing blue or perhaps to allow a person whose retina did not perceive blue to "see" blue. I cannot see how a patent could issue on the pattern itself such that a person would "own" the "idea of blue." Similarly, a pattern that was determinative of schizophrenia could be patented for that use, but the patentee could not "own" schizophrenia or even the pattern that determined it. If a researcher created a pattern by altering cells, then he could patent, as a composition of matter, the altered cells, perhaps defined in part by the pattern they created. Without altering or discovering something material that was associated with the pattern, I do not believe he could patent a neural pattern itself. The fact that neural pattern patents will be patents to uses of the patterns, not for the patterns themselves, may well prevent the kinds of controversies that have attended gene patents.
Patents and Medical Treatment
Neuroscience "pattern" patents might, or might not, run into a problem genetics patents have largely avoided: the Ganske-Frist Act. In September 1996, as part of an omnibus appropriations bill, Congress added by amendment a new Section 287(c) to the patent law.
This section states that
With respect to a medical practitioner's performance of a medical activity that constitutes an infringement under section 271(a) or (b) of this title, the provisions of sections 281, 283, 284, and 285 of this title shall not apply against the medical practitioner or against a related health care entity with respect to such medical activity.54
This section exempts a physician and her hospital, clinic, HMO, or other "related health care entity" from liability for damages or an injunction for infringing a patent during the performance of a "medical activity." The amendment defines "medical activity" as "the performance of a medical or surgical procedure on a body," but it excludes from that definition "(1) the use of a patented machine, manufacture, or composition of matter in violation of such patent, (2) the practice of a patented use of a composition of matter in violation of such patent, or (3) the practice of a process in violation of a biotechnology patent."55 The statute does not define "a biotechnology patent."
Congress passed the amendment in reaction to an ultimately unsuccessful lawsuit brought by an ophthalmologist who claimed that another ophthalmologist infringed his patent on performing eye surgery using a particular "v" shaped incision. Medical procedure patents had been banned in many other countries and had been controversial in the United States for over a century; they had, however, clearly been allowed in the United States since 1954.56
Consider a neural pattern patent that claimed the use of a particular pattern of brain activity in diagnosing schizophrenia or as a guide to its treatment.57 A physician using that pattern without permission would not be using "a patented machine, manufacture, or composition of matter in violation of such patent." Nor would she be engaged in "the practice of a patented use of a composition of matter in violation of such patent." With no statutory definition, relevant legislative history, or judicial interpretation, it seems impossible to tell whether she would be engaged in the "practice of a process in violation of a biotechnology patent." Because molecules, including DNA, RNA, and proteins, can be the subjects of "composition of matter" patents, most genetic patents should not be affected by the Ganske-Frist Act.58 Neural pattern patents might be. It is, of course, quite unclear how significant an influence this exemption from patent liability might have on neuroscience research or related medical practice.
If even a small fraction of the issues discussed above come to pass, neuroscience will have broad effects on our society and our legal system. The project to which this paper contributes can help in beginning to sift out the likely from the merely plausible, the unlikely, and the bizarre, both in the expected development of the science and in the social and legal consequences of that science. Truly effective prediction of upcoming problems — and suggestions for viable solutions — will require an extensive continuing effort. How to create a useful process for managing the social and legal challenges of neuroscience is not the least important of the many questions raised by neuroscience.