The biotechnology revolution is impacting all of the biological sciences, and toxicology is no exception. Perusal of the abstracts of the Society’s recent annual meetings confirms that our field is already actively using the powerful new tools of molecular biology to identify and understand the mechanisms of toxic responses. Thus, the Workshop on Toxicogenomics and Risk Assessment sponsored by the Society and discussed in this issue by Cunningham et al. was a timely undertaking.
This was not the first workshop to focus on the application of genomic approaches to risk assessment, but it was unique in several respects. First, it brought together thought leaders whose thinking about the application of genomic and molecular technologies was broader than that represented at earlier workshops. The toxicology community has rightly focused much attention on the potential of array-based methods of monitoring gene transcription to provide 'global monitoring' of responses to toxic insults. Clearly, this is an exciting and important new technology, as several of the workshop speakers emphasized. However, other important aspects of genomic technologies were also brought out at this meeting. These include the advent of haplotype markers, which will greatly facilitate the identification of biological traits controlled by genetic variation (polymorphisms), and the potential to exploit proteomic and metabonomic technologies to develop improved serum markers of cellular injury.
To me, one of the most important revelations of the human genome sequence is the fact that single nucleotide polymorphisms (SNPs, single-base sequence differences among individuals) occur approximately every 1000 nucleotides. Since genes consist of thousands of nucleotides, every gene, and thus every molecular target, will on average contain multiple polymorphic differences from individual to individual. This suggests that genetic variation may be a much greater factor in the variability of toxic and receptor-mediated responses than has previously been realized. The potential to link these genetic variations with biological responses is at least as exciting as our new ability to simultaneously monitor gene transcripts and gene products via the new families of '-omic' technologies.
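The arithmetic behind this claim can be made concrete with a back-of-the-envelope sketch (the gene lengths below are illustrative assumptions, not figures from the editorial):

```python
# If SNPs occur roughly once per 1,000 nucleotides, a gene spanning
# N nucleotides is expected to contain about N / 1,000 polymorphic sites.

SNP_DENSITY = 1 / 1000  # ~1 SNP per 1,000 nucleotides (human genome estimate)

def expected_snps(gene_length_nt: int) -> float:
    """Expected number of polymorphic sites in a gene of the given length."""
    return gene_length_nt * SNP_DENSITY

# Even a modest gene of a few thousand nucleotides is expected to carry
# several SNPs; genes spanning tens of thousands carry many more.
for length in (3_000, 10_000, 50_000):
    print(f"{length:>6} nt -> ~{expected_snps(length):.0f} expected SNPs")
```

Under this simple density model, essentially no gene of realistic length escapes polymorphism, which is the point of the paragraph above.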
The recent understanding of the conservation of haplotypes (large blocks of DNA sequence that have not been randomized by meiotic crossing over during evolution) has vastly simplified the identification of genetically controlled biological responses. Markers of these genetic blocks are now available, making it possible to establish that a response is genetically controlled without first identifying the specific SNP responsible. This ability to determine whether a given response has a genetic basis greatly reduces the effort of identifying SNPs that affect biological outcomes. Already, these techniques are being applied in the pharmaceutical industry to identify genetic variations associated with adverse reactions to drugs, as well as subpopulations with differences in desired pharmacological responses (see Cantor, 1999; Roses, 2002). Tangible benefits of such approaches are no longer beyond the horizon: the FDA has already approved several drugs indicated for individuals with a specific genotype or protein expression level (e.g., Gleevec, Herceptin, Prolastin, anti-hemophilic factor). Others are sure to follow.
Another important opportunity discussed at the workshop is the potential of proteomic approaches to identify specific forms of proteins that leak from cells when toxic responses cause cellular lysis or compromise membrane integrity. Proteomic technologies have the potential to identify a set of such markers that may allow monitoring of most of the individual cell populations in the body, thereby allowing many pathological processes to be monitored by analysis of these markers in blood. The International Life Sciences Institute has recently undertaken a consortium approach to identifying such new accessible biomarkers, based in part on the opportunities created by these advanced technologies (see 'Development and Application of Biomarkers of Toxicity' at www.ilsi.org/site_search/index.cfm).
In association with the genomic workshop, a congressional outreach effort was organized by the Regulatory Affairs Committee of SOT. I had the privilege to participate in this discussion on Capitol Hill with congressional aides and Congressmen,1 including Congressmen Nethercutt and McDermott of Washington and staffers of Congressman Greenwood of Pennsylvania. This undertaking reflects the recognition by our Society that we need to keep Congressional and Executive leaders aware of the impact of advancing science on product development and safety. It was gratifying to see the interest of those present and to have the opportunity to discuss the potential for improved health through the application of new technologies and the need for novel approaches to assure the safety of new biotechnology-derived products. If potential improvements in product development and regulation are to become a reality, legislators must be informed and willing to support the necessary development processes.
It is clear that genomic technologies are already being used to develop new screening strategies and biomarkers of toxicity, to determine mechanisms of cellular and molecular perturbations, to identify genetic variations that determine responses to chemical exposure and sensitivity to toxic outcomes, and to monitor alterations in key biochemical pathways. How, then, can we expect genomic data to be integrated into regulatory safety evaluation and risk assessment? We must recognize that regulatory application of these methods requires careful evaluation and validation, and that this presents specific challenges in the case of highly multiplexed assays. The association of the endpoints measured with pathological outcomes, the accuracy and reproducibility of measurements, and the potential for 'false positive' and 'false negative' outcomes with respect to the proposed objective of each measurement must be understood. It will take some time to achieve this level of understanding for assays in which hundreds or thousands of measurements are made simultaneously. For this reason, multi-endpoint assays such as large-scale expression arrays and protein chips will likely be used first as discovery tools to define mechanisms and identify key targets, with simpler formats then developed for routine regulatory application.
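Why simultaneous measurement of thousands of endpoints complicates validation can be illustrated with a small, purely hypothetical calculation (the significance level and array size below are conventional assumptions, not figures from the workshop):

```python
# Illustration: with thousands of simultaneous tests, even a small
# per-test false-positive rate produces many spurious 'hits' unless
# the significance threshold is corrected for multiple comparisons.

alpha = 0.05        # conventional per-test significance level (assumption)
n_genes = 10_000    # illustrative scale of an expression array (assumption)

# Expected number of false positives if every gene were truly unaffected:
expected_false_hits = alpha * n_genes          # 500 genes flagged by chance

# Probability of at least one false positive somewhere on the array:
p_any_false = 1 - (1 - alpha) ** n_genes       # effectively certain

# A Bonferroni-style correction shrinks the per-test threshold instead:
alpha_corrected = alpha / n_genes              # far stricter per-gene cutoff

print(expected_false_hits, p_any_false, alpha_corrected)
```

This is one reason the editorial anticipates that large arrays will first serve as discovery tools: candidate endpoints found this way still need confirmation in simpler, better-characterized assay formats.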
It is clear that the field is on the verge of a major transition that will employ the impressive technologies of the biological revolution to improve our approaches to risk assessment and product development. This will include improved approaches to screening and hazard identification, quantitative determination of exposure and response, determination of factors that convey individual risk or resistance, population and individual monitoring, and improved mechanistic understanding. It is a privilege to work in the field at such an exciting time.
1 Harry Salem and Denise Robinson organized this event, and Dave Eaton (SOT President at that time), Ray Tennant (Director of the National Center for Toxicogenomics), and I made presentations.
Cantor, C. R. (1999). Pharmacogenetics becomes pharmacogenomics: Wake up and get ready. Molec. Diag. 4, 287–288.
Roses, A. D. (2002). Genome-based pharmacogenetics and the pharmaceutical industry. Nature Rev. Drug Disc. 1, 541–549.