Anti-Precautionary Risk Assessment
Vyvyan Howard
Toxico-pathologist, University of Liverpool
(Presentation made at the launch of the Independent Science Panel on 10 May 2003, London)

Vyvyan Howard examined how the regulatory process addresses pervasive technologies and the role that risk assessment plays within it. Risk assessment is a procedure commonly used in industry to enable the licensing of processes. It was invented by engineers to examine the integrity of physical structures and predict how they would perform. In such cases the geology, the materials used and the geometry of the structure are known, allowing engineers to model the effects of various disasters on the structure. Nonetheless, despite many years of sophisticated modelling based on hard data, unpredictable events still happen; this usually represents a failure of hazard identification.

Howard explained that there are four phases to risk assessment: hazard identification, which requires an understanding of the system in question; hazard assessment, which costs time and money for hard science, and where positive findings require action; exposure assessment, i.e. establishing who is being exposed to what, so that any detected effects can be interpreted; and finally the risk assessment itself, which depends on the first three steps. A risk assessment is only as good as the quality of the preceding steps.
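To make that dependency concrete, here is a minimal sketch (not from the presentation; the 0-1 "quality" scores and the function name are invented purely for illustration) of the idea that the final step can be no better than its weakest input:

```python
# Illustrative only: phase names follow the text; the 0-1 quality scores are invented.

def overall_confidence(hazard_id: float, hazard_assess: float, exposure: float) -> float:
    # The final risk assessment inherits the quality of the weakest preceding step.
    return min(hazard_id, hazard_assess, exposure)

# A hypothetical case with reasonable hazard science but almost no exposure data:
print(overall_confidence(hazard_id=0.7, hazard_assess=0.8, exposure=0.2))  # -> 0.2
```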

Risk assessment is now applied to very complex systems such as ecosystems. However, it is impossible to have comprehensive hazard data for such systems, which have many unknowns. Missing data can either be ignored or modelled – but these models can be subjective. Howard pointed out that there are examples of risk assessments without a single piece of data; instead they are done on a computer with models, and thus one can get any answer one wants!
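As a toy illustration of that point (this is not one of Howard's examples; the threshold, the exposure distributions and all the figures are invented), a purely model-based "risk assessment" can return almost any answer, depending on what the modeller assumes for the missing exposure data:

```python
# Toy example: the same calculation gives 'negligible' or 'substantial' risk
# depending entirely on an assumed, unmeasured exposure distribution.

import random

TOXIC_THRESHOLD = 1.0  # hypothetical dose above which harm is assumed


def modelled_risk(assumed_mean_exposure: float, n: int = 100_000) -> float:
    """Probability of exceeding the threshold under an assumed exposure distribution."""
    random.seed(0)
    exceed = sum(
        random.expovariate(1.0 / assumed_mean_exposure) > TOXIC_THRESHOLD
        for _ in range(n)
    )
    return exceed / n


print(f"Optimistic assumption (mean 0.1): risk = {modelled_risk(0.1):.5f}")   # effectively zero
print(f"Pessimistic assumption (mean 1.0): risk = {modelled_risk(1.0):.5f}")  # roughly 1 in 3
```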

He gave the example of an extremely complex system, the Scotian shelf food web, involving birds, fish and benthic organisms in the air, ocean and seabed respectively. If someone claims to have done a risk assessment on introducing GM salmon, for example, to such a system, this implies that they understand the system and the impacts of the introduction. However, the interactions between the elements of such a system are highly non-linear and poorly predictable. Furthermore, there could be irreversible change in the balance of the system.
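A minimal sketch can show why such systems resist prediction (this is not a model of the Scotian shelf; the logistic map and its parameters are a standard textbook stand-in for non-linear population dynamics): two populations that start out almost identical end up in completely different states within a few dozen generations.

```python
# Sensitivity to tiny differences in a simple non-linear population model.

def logistic(x0: float, r: float = 3.9, steps: int = 40) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)  # non-linear growth with crowding
    return x


print(logistic(0.600000))  # baseline population fraction
print(logistic(0.600001))  # 1 part in a million different at the start, wholly different outcome
```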

Howard stressed that precaution, i.e. doing nothing, should always be considered as an option. He referred to the Wingspread definition of the Precautionary Principle: “When an activity raises threats of harm to human health or the environment, precautionary measures should be taken, even if some cause and effect relationships are not fully established scientifically”.

In 1999, a letter to Nature by Holm and Harris claimed that the Precautionary Principle stifles discovery and progress. The authors argued that the only really dangerous time in the production of GM food was when the first GM food was made, as that was the time of greatest uncertainty.

Howard and Peter Saunders wrote a reply, which highlighted that the Precautionary Principle is a tool for decision-making, not for evaluating evidence. It requires us to take into account not just the probability that a technology will be hazardous, but also the benefits if it succeeds and the costs if things go wrong. For example, there may have been a very small probability that a large ship travelling at high speed in the North Atlantic would hit an iceberg, but the captain of the Titanic should have thought more about what could happen if it did, all the more so because it hardly mattered if the voyage lasted a few more hours. The idea of time is thus an important one in this debate: what speed do we have to go at, and what are the drivers for that?
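The Titanic point can be put as a back-of-envelope expected-outcome calculation (the numbers and units below are invented; only the asymmetry between a small certain benefit and a rare catastrophic cost matters):

```python
# All figures are illustrative and the units arbitrary.

def expected_net_outcome(p_disaster: float, cost_of_disaster: float, benefit: float) -> float:
    """Benefit if all goes well, minus the probability-weighted cost if it does not."""
    return benefit - p_disaster * cost_of_disaster


# Pressing on at full speed: a small, certain benefit (hours saved) against a
# small chance of catastrophic loss.
print(expected_net_outcome(p_disaster=0.001, cost_of_disaster=100_000, benefit=10))      # -90.0
# Slowing down: the small benefit is forgone, but the catastrophic term all but vanishes.
print(expected_net_outcome(p_disaster=0.000001, cost_of_disaster=100_000, benefit=0))    # -0.1
```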

Howard noted that risk assessment is the main tool used to impede the implementation of the Precautionary Principle. Proponents of a technology present it to decision-makers as evidence that the technology is safe. Because the reports are very lengthy and technical, it is often assumed that a great deal of hard science has been conducted, and the risk assessments are frequently accepted as hard scientific ‘proof’, even when they rest on unrealistic assumptions.

However, on closer examination, risk assessments are often flawed. He cited Monsanto’s report to the Irish EPA for trials of a glyphosate-resistant sugar beet. Statements relating to potential harmful effects or risk of damage describe these as ‘low’, ‘negligible’ and ‘effectively zero’, yet there is little methodology behind these opinions. To Howard, such wording actually means that the risk is non-zero; but how near to zero it is, and what it means if it is not zero, depends on the time scale.

He gave the example of the toxicology study presented by industry for Chardon LL GM maize, a fodder maize for cattle. The study had taken purified pat protein (which confers glufosinate resistance) from another plant species (canola), fed it to a non-relevant species (the rat), and examined irrelevant anti-nutrients (phytate, which is of no significance in cattle). It found evidence that the maize was not substantially equivalent (e.g. changes in fatty acids), but ignored this. The study did not feed cattle the whole plant at high dietary proportions, which is the realistic application of the product. So although a lot of science was done, it was not actually relevant or appropriate.

Howard noted that the FDA’s own scientists were already raising questions about GM food in 1992. He quoted Dr Gerald B Guest, Director of the FDA Center for Veterinary Medicine, who pointed out that a single plant product may constitute a significant proportion (50-75%) of the animal diet, and hence that the whole plant should be considered, not just the protein in question. Guest also raised human food safety concerns.

Howard then compared exposure assessment for pharmaceuticals and for food. With pharmaceuticals, development costs are high, the duration of exposure is short, the dose is low and there is an element of choice, as drugs are usually taken only when necessary. With food, lifetime exposure is approximately 70 tonnes, so the dose is many times higher. Shouldn’t we therefore be more circumspect about making novel changes to the food chain?
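A rough arithmetic check of that contrast (the lifespan and the example drug course below are assumptions, chosen only to show the orders of magnitude):

```python
# Back-of-envelope comparison of food and pharmaceutical exposure.

LIFETIME_FOOD_KG = 70_000   # ~70 tonnes, the figure quoted in the talk
LIFESPAN_YEARS = 75         # assumed

daily_food_g = LIFETIME_FOOD_KG * 1000 / (LIFESPAN_YEARS * 365)
print(f"Average food intake: ~{daily_food_g:,.0f} g/day")   # roughly 2.5 kg of food per day

# Hypothetical drug course: 500 mg twice daily for 10 days.
drug_course_g = 0.5 * 2 * 10
print(f"Example drug course: {drug_course_g} g in total")   # 10 g, against tonnes of food in a lifetime
```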

We are often told that the US population has been eating GM food for many years with no problems. It is true that there are no acute, overt effects, but to detect changes we need to know where we are starting from (baseline data) and the dose (exposure data). No such data exist for the US, as nobody is monitoring who is eating what. Thus, if there are any changes in human pathology, it will not be possible to relate them to GM. If GM food were causing changes in common conditions such as allergy, autoimmune disease and cancer, there is no way we could know. Howard gave the example of thalidomide, which causes a very obvious, unusual condition; had thalidomide caused a much more common condition such as cleft palate, the likelihood is that we still would not know about its adverse effects. Similarly, if GM food changes the background rate of cancer or allergy, we do not have the data from the US population to identify GM food as the cause.
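To indicate the scale of the problem (a rough sketch using the standard two-proportion sample-size approximation; the baseline rate and the size of the increase are illustrative, not figures from the talk): even with perfect exposure records, detecting a 10% relative rise in a condition with a 5% background rate would require cohorts of tens of thousands.

```python
# Approximate sample size per group for a two-proportion comparison
# (5% two-sided alpha, 80% power), using the usual normal approximation.

from math import ceil


def n_per_group(p1: float, p2: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    return ceil((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)


# 5% baseline rate vs a 10% relative increase (5.5%):
print(n_per_group(0.05, 0.055))  # ~31,000 people per group, exposed and unexposed
```

Without knowing who is exposed and who is not, even cohorts of that size tell us nothing.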

There are many things happening that we cannot fully explain. For example, the incidence of all human cancers in the UK is increasing. Experts have suspicions as to the cause, but no absolute proof; the evidence indicates that environmental factors are predominant.

Howard highlighted a report from the European Environment Agency, which provides numerous examples of early warnings of potential hazards being ignored (Late Lessons from Early Warnings: The Precautionary Principle 1896-2000, edited by Poul Harremoës, David Gee, Malcolm MacGarvin, Andy Stirling, Jane Keys, Brian Wynne and Sofia Guedes Vaz, Environmental Issue Report No 22, Office for Official Publications of the European Communities, 2002).

For example, in the case of radiation, the first injuries from exposure were noted in 1896 and the first publication showing radiation damage appeared in 1904, but it was not until much later that the regulatory process caught up. Recommendations were made in 1949 and revised in 1977 and 1990, each time tightening the recommended dose limits; even so, subsequent data show that the model assumptions were wrong. In the case of asbestos, it took 100 years between the first warning of harm (1898) and its total ban in France and the EU (1998). Howard stressed that we have to think ahead about our actions and act to anticipate harm.

The timeline for GM science is relatively short. The first genetic modification of bacteria with plasmids was carried out in 1975-1977, the first GM plants were produced in 1980-1983 and the first transgenic mice in 1984-1985. The Flavr Savr GM tomato was commercialised in 1994, and the first harvest of transgenic soya came in 1996. Thus, the US population has only been exposed to GM food for about six or seven years. If GM foods do cause growth-factor effects, and the time lag for the induction of cancers is 20 years, we may yet have to see the full effects of GM foods.

Since GM crops were commercialised in the US, however, many problems have surfaced. Contamination due to gene flow is inevitable. Risk assessments should not be asking whether we can keep GM and non-GM crops apart; they should be looking at the consequences of GM genes getting everywhere, and asking whether it is safe to proceed on that basis. There have also been gene stacking, regulatory failures and contamination from ‘pharm’ crops. What will the future bring? Howard stressed that we need to think this out carefully, as GM is self-replicating pollution.

He then explained what would be needed to test for one area of potential hazard: allergy. About 3,000 human volunteers would have to be fed GM food and tested. This has not been done so far, but Howard maintained that it is doable. However, GM companies are very reluctant to conduct clinical trials, relying instead on ‘substantial equivalence’, which is a chemical test of composition and is not predictive of biological effects. He noted that conceptual shifts have occurred in toxicology: low-level contamination, non-linear dose-response curves and background levels are all now recognised as important. Such changes in the way we think about chemicals should also be applied to food.
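As a rough indication of what a trial of that size could establish (a sketch using the standard "rule of three" approximation; the trial design details are not specified in the talk):

```python
# If no allergic reactions are seen in n subjects, the 95% upper confidence bound
# on the true reaction rate is roughly 3/n (the 'rule of three').

def rule_of_three_upper_bound(n_subjects: int) -> float:
    return 3 / n_subjects


n = 3000
print(f"No reactions in {n} volunteers -> true rate below ~{rule_of_three_upper_bound(n):.2%}")
# -> below ~0.10%, i.e. still compatible with many thousands of cases across a whole population
```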

He asked how long the reassurances offered in a risk assessment are good for. Statisticians have a concept called ‘waiting time’: a low-probability event, given enough time, becomes a certainty. If GM crops are introduced, we are dealing with eternity. Is the risk assessment good for 10 minutes, a generation, 50 generations, or all time? Risk assessments are usually silent on this.
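The waiting-time argument can be illustrated with elementary probability (the annual probability below is hypothetical): an event with a "negligible" chance per year becomes near-certain over a long enough horizon.

```python
# Probability of at least one occurrence over n years, for a small fixed annual probability.

p_per_year = 1e-4  # assumed "negligible" annual probability of an adverse event

for years in (10, 100, 1_000, 10_000, 100_000):
    p_at_least_once = 1 - (1 - p_per_year) ** years
    print(f"{years:>7} years: P(at least one event) = {p_at_least_once:.3f}")

print(f"Expected waiting time: {1 / p_per_year:,.0f} years")  # ~10,000 years on average
```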

Howard stressed that we always have the option of doing nothing. This is rarely discussed and never put forward by the developers of new technologies. Adopting a precautionary approach will often mean postponing action, but that does not equate to being anti-science. He concluded by saying that the ISP is asking for much more science before we go ahead with GM technology, as not enough thorough work has been done.

During the Question and Answer session, Howard emphasised that he would like to see a standard front page on all risk assessments, stating clearly what the hazards are, which of them have and have not been tested, and what the level of uncertainty is, so that decision-makers are better able to come to a rational decision.
