Quotes by Experts

Jeremy Rifkin:
“Anti-vivisection societies and animal rights organizations have been making this argument for a long time, only to be scorned by scientific bodies, medical associations, and industry lobbies who accuse them of being anti-progress and caring more about animals than people. Now, it is the scientific establishment that has come to the very same conclusions. Toxicity testing in animals is bad science.”
_________________________________________

Nature 10/11/05:
“Scientists at the European Centre for the Validation of Alternative Methods (ECVAM) in northern Italy — which was set up by the European Commission to develop alternatives to animal testing — argue that animal tests are badly flawed. They say the new drive for alternative methods will improve the science of toxicity testing. And public safety demands that the new tests are shown to be better predictors of toxicity than the existing methods.”

__________________________________________

Lancet 04/06/2011:
“"A fundamental problem is that a rat is not a human. They are different sizes, have different metabolisms and have different diets so using animals to predict effects on humans is difficult. Fifty percent of compounds that prove to be safe in rats prove not to be safe in humans so it really is the toss of a coin," Dexter told Sky News.”
“It is increasingly clear that an important factor contributing to these problems is the over-reliance of the pharmaceutical industry on the use of animals to predict drug behaviour in man. The stark differences, not only in the diseases of different animal species, but also the ways that they respond to drugs, are now well known. Many studies have shown that animal tests frequently fail to translate to the clinic, with estimates of their ability to predict effects on people as low as 37–50%, or no better than the toss of a coin.”

__________________________________________

Thomas Hartung:
“But the toxicology tests on which regulators rely to gather this information are stuck in a time warp, and are largely based on wasteful and often poorly predictive animal experiments.”
The toxicity tests that have been used for decades are “simply bad science”, he explains. “We now have an opportunity to start with a clean slate and develop evidence-based tests that have true predictive value.”
“To test a chemical for its potential to cause cancer takes five years and involves 400 rats. More than 50% of the results are positive, of which 90% are false positives.”
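
The arithmetic behind those figures is worth spelling out. The short Python sketch below assumes a hypothetical batch of 100 chemicals purely for illustration; only the percentages come from Hartung's quote. It shows how few genuine positives such a test would leave.

# Illustration of the figures quoted above; the batch size of 100 is an assumption, not from the source.
chemicals_tested = 100          # hypothetical batch, for illustration only
positive_rate = 0.50            # "More than 50% of the results are positive"
false_positive_share = 0.90     # "... of which 90% are false positives"

positives = chemicals_tested * positive_rate         # roughly 50 chemicals flagged
false_positives = positives * false_positive_share   # roughly 45 of those are false alarms
true_positives = positives - false_positives         # roughly 5 genuine positives remain
print(positives, false_positives, true_positives)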

__________________________________________

David Biello in Scientific American (13.10.2011):
"We are screening 10,000 chemicals using these rapid tests to characterize the bioactivity of the chemicals to predict their hazard and to use that information to prioritize for further screening and testing," says biologist David Dix, deputy director of EPA's National Center for Computational Toxicology. "We can test a lot of chemicals with a lot of repetitions at a lot of different concentrations."
The program, initially started at EPA as ToxCast to assess 1,000 chemicals (and known as Tox21 in its expanded form), employs a robot to speed chemical screening. On plastic plates filled with 1,536 tiny wells, the robot drops varying amounts of different chemicals onto human cells and human proteins. Essentially, each plate has 1,536 experiments underway at the same time. "In a stack of 100, we have 150,000 combinations of chemicals and targets," Dix says.
The robot arm and its numerous five- to 10-microliter wells replace the old standby of toxicology—animal testing. In addition to being slow and controversial, animal tests do not reveal how a chemical might impact humans, nor do they deliver any insight into the mechanisms by which a given chemical produced toxic outcomes. Simply by running the robotic tests, the EPA and its partner agencies will generate more information on chemical toxicity in the next few years than has been created in the past century. The effort has already screened more than 2,500 chemicals, including the dispersants employed to clean up BP's 2010 oil spill in the Gulf of Mexico.
The new information may allow toxicology to evolve from a reactive science to a predictive one; models of liver toxicity based on chemical testing, for example, could predict how new chemicals would interact with the liver, based on molecular structure and other information. Already, ToxCast scientists have made such a predictive model for liver toxicity: It accurately forecast tumor formation in rats and mice that had been exposed for two years to certain chemicals. A similar effort proved accurate for reproductive toxicity, including vascular development and endocrine disruption—an area of keen interest for human exposure to chemicals such as bisphenol A (BPA).
In addition, the high-speed robotic testing will allow toxicologists to better understand mixture and low-dose effects by testing both combinations of chemicals for additive damage as well as how, for example, 15 different concentrations of a given chemical impact human cells. "We suspect that when we look at 10,000 chemicals we'll see a lot of activity that we didn't know about," Dix says of the two-year effort, in which the EPA has partnered with a handful of federal health agencies.
"For a lot of chemicals, there's no requirement for animal toxicity testing or any other type of testing," Dix notes. "Tox21 is going to provide information where there is no information."

__________________________________________

Vittorio Prodi:
“Toxicity testing is not delivering what safety of products demands nor is it sufficiently relying upon the most advanced technologies. It typically involves studying adverse health outcomes in animals subjected to high doses of toxicants with subsequent extrapolation to expected human responses at lower doses. But we are not 70kg rats feeding largely on chemicals. The system is expensive, time-consuming, low-throughput and often provides results of limited predictive value for human health. The toxicity testing methods are largely the same for industrial chemicals, pesticides and drugs, and have led to a backlog of more than 80,000 chemicals to which humans are potentially exposed but whose potential toxicity remains largely unknown.
In the US, a new toxicity testing plan has been launched which includes the use of predictive, high-throughput cell-based assays (of human origin) to evaluate perturbations in key pathways of toxicity, and to conduct targeted testing against those pathways. Mapping the entirety of these pathways (hence the 'Human Toxome Project') could be a large-scale effort, perhaps on the order of the Human Genome Project. It could develop tremendous opportunities for REACH, the testing ban for cosmetics, the pesticide regulation, and the endocrine disruptor screening, while reducing animal suffering. How can Europe contribute to this goal?”

__________________________________________

Francis Collins, director, NIH’s National Human Genome Research Institute, 2008:
Animal experimentation is “expensive, time-consuming, uses animals in large numbers, and it doesn’t always work.”
__________________________________________

Samuel Wilson, acting director of the National Institute of Environmental Health Sciences and NTP:
“The new research model would allow scientists to test 100,000 compounds in 1,500 different concentrations in about two days compared with years if the testing was done on animals.”
__________________________________________

Francis Collins in The Scientist:
“With earlier and more rigorous target validation in human tissues, it may be justifiable to skip the animal model assessment of efficacy altogether.”
__________________________________________

Francis S. Collins, George M. Gray and John R. Bucher in Science, 15-02-2008:
“We propose a shift from primarily in vivo animal studies to in vitro assays, in vivo assays with lower organisms, and computational modeling for toxicity assessments.”

__________________________________________

Alison Abbott in Nature 10/11/2005:
“Most animal tests over- or underestimate toxicity, or simply don’t mirror toxicity in humans very well.”
“Commercial and political pressures are pushing for a halt to the use of animals in toxicology tests in Europe. This change will also mean a move towards better science, says Alison Abbott.”

__________________________________________

Horst Spielmann:
“Animal embryotoxicity tests are not reliably predictive for humans,” says Horst Spielmann, a toxicologist at the Federal Institute for Risk Assessment in Berlin. “When we find that cortisone is embryotoxic in all species tested except human, what are we supposed to make of them?”
__________________________________________

Pandora Pound in British Medical Journal:
“Ideally, new animal studies should not be conducted until the best use has been made of existing animal studies and until their validity and generalisability to clinical medicine has been assessed.”
__________________________________________

John Prineas and Michael Barnett in New Scientist:
“Their findings back the view that the reason for the lack of progress in this field is that most multiple sclerosis research is done on mice with a disease that is actually quite different.”
__________________________________________

National Institute of Environmental Health Sciences:
“A second argument against selection bias is that knowledge to predict carcinogenicity in rodent tests is highly imperfect, even now, after decades of testing results have become available on which to base prediction.”
__________________________________________

Robert Sharpe:
“Most adverse reactions which can occur in patients cannot be demonstrated, anticipated or avoided by the routine subacute and chronic toxicity experiment” (Zbinden 1966).
__________________________________________

Honess et al 2004:
“More long-tailed macaques (Macaca fascicularis) than any other primate are imported into the UK for research, and journey times may be up to 58 h.”
__________________________________________

Erwin, Drake and Deni – 1979:
“The subjects were housed individually in 1-m³ wire cages. All were kept in the same colony room and were exposed to identical environmental conditions.”
__________________________________________

X.S. Puente 2006:
“Despite the high conservation of cancer genes between both species, we identified 20 genes containing several codon insertions or deletions in their protein coding regions, although the functional significance of these differences, including their putative association with cancer, will require further studies.”
__________________________________________

Yasuhiro 2009:
“Animals captured and bred in Vietnam, for instance, may respond differently in toxicological or immunological studies to those originating in the Philippines or in Mauritius.”
__________________________________________

7th World Congress on Alternatives & Animal Use in Life Sciences (Conclusive Press Release):
“Participants agreed that current knowledge of the human genome and the genomes of many animal species has resulted in such a level of scientific progress in the area of gene mapping and expression (genomics) that it will make it possible in the near future to apply these tools, together with current computational technologies (linking and analysing massive data bases) and sophisticated second generation in vitro test systems, to assess the hazards and risks of chemical and microbiological substances without the use of experimental animals.”
__________________________________________

Robert Matthews 2008:
“It is crucial to know how and why such tests fail to predict what happens in humans.
That can happen in two ways: firstly, where animals fail to warn of real toxic effects in humans - as in thalidomide - and secondly, where they give false alarms, with the animals falling victim to drugs that would be fine in humans.”

__________________________________________

“Toxicity testing in the 21st century: a vision and a strategy” National Research Council of the National Academy of Sciences (U.S.A.) 2007:
"Using the results of animal tests to predict human health effects involves a number of assumptions and extrapolations that remain controversial. Test animals are often exposed to higher doses than would be expected for typical human exposures, requiring assumptions about effects at lower doses or exposures. Test animals are typically observed for overt signs of adverse health effects, which provide little information about biological changes leading to such health effects."