I'll present here a brief history of my own field, culminating in a description of the scientific endeavor I am engaged in and its applications. I'm presenting my own field for several reasons:
By the late 1800s it was recognized that many of the processes that enable organisms to function are chemical reactions, and these chemical reactions are similar to those we can carry out in the laboratory. Two aspects of these reactions were poorly understood at this stage:
In order to try to understand these problems, biochemists looked carefully at individual reactions that occurred in biological systems, and they compared the mechanisms found in one cell type to those found in other cell types. They also looked for ways to generalize these reactions and understand how they were controlled.
Wilhelm Röntgen was a physicist at the University of Würzburg. In 1895 he began studying cathode ray tubes, which were glass tubes containing metal targets onto which an electric current could be directed. He discovered that the rays emanating from the tube could penetrate soft matter like paper or skin but were absorbed by lead, thick sheets of aluminum, or bone. He realized that these rays were a new form of radiation, and because of their unusual properties he called them X-rays. He and other X-ray pioneers used them primarily for imaging, as they are used today. It was found that some metal targets, like tungsten, produce X-rays with a smooth spectral distribution; that is, the intensity of the X-rays varies smoothly as a function of wavelength or energy. Other metals, like copper, produce X-rays with a strongly peaked spectral distribution. In the case of copper, most of the energy is produced at a wavelength of 1.54 Å: the incoming electron beam ejects a K-shell (innermost-shell) electron from a copper atom, and an X-ray of that wavelength is emitted when an electron from the next shell out drops into the vacancy.
Around 1912 researchers realized that the wavelength of the electromagnetic radiation that Röntgen called X-rays was the right size to produce diffraction off a crystalline substance. The first diffraction photograph was taken by Friedrich and Knipping in 1912, and the theory that explained the discrete spots on the photograph was developed shortly thereafter by Max von Laue and W.L. Bragg. Experimentalists and theorists worked to produce a technique for studying the three-dimensional structures of molecules based on the positions and intensities of these discrete spots. It was recognized that a single stationary crystal would produce a very small number of diffraction spots, not enough to unambiguously solve the structure of the molecule that makes up the crystal. Therefore a full understanding of the structure would require either that the experimenter place many crystals in the X-ray beam, so that different spots would be excited by the incoming beam in different crystals, or that the experimenter rotate a single crystal about some axis while the data are recorded. The former method is known as powder diffraction, and can be used profitably to determine structures of small molecules. The latter method is single-crystal diffraction, and can be used for small and large molecules.
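The condition for constructive interference off a set of lattice planes is Bragg's law, nλ = 2d sin θ, relating the X-ray wavelength λ, the plane spacing d, and the diffraction angle θ. As a minimal illustration (the function name and the example spacings below are my own, not from the text), the geometry can be computed directly:

```python
import math

WAVELENGTH = 1.54  # Å, the copper K-alpha wavelength mentioned above

def bragg_angle(d_spacing, wavelength=WAVELENGTH, order=1):
    """Return the Bragg angle theta in degrees for diffraction of the
    given order off lattice planes separated by d_spacing (in Å),
    from n*lambda = 2*d*sin(theta)."""
    s = order * wavelength / (2.0 * d_spacing)
    if s > 1.0:
        # Planes spaced more closely than lambda/2 cannot diffract
        raise ValueError("no diffraction possible at this wavelength")
    return math.degrees(math.asin(s))

# Protein crystals have large plane spacings, so their reflections
# appear at small angles close to the direct beam:
for d in (1.54, 2.0, 50.0):  # plane spacings in Å (illustrative values)
    print(f"d = {d:5.2f} Å  ->  theta = {bragg_angle(d):6.2f} deg")
```

Note that for d equal to the wavelength itself, sin θ = 1/2 and the reflection appears at exactly 30°, while the large spacings typical of macromolecular crystals diffract at small angles.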
The theory developed by Bragg, von Laue, and others is based on the fact that the electrons in the crystal are responsible for scattering the X-rays, and that constructive interference between the scattered beams will occur only in specified directions associated with planes of lattice points in the crystal. Each spot or "reflection" can be characterized by three integer index values h, k, and l; thus the relevant data in a diffraction experiment are the intensities I(h,k,l) of these spots. The relationship between the electron density and the intensities is given by
ρ(x,y,z) = (1/V) Σ_hkl F(hkl) exp(-2πi(hx/a + ky/b + lz/c))
where V is the volume of the unit cell; a, b, and c are its dimensions; and F(hkl) is a complex number whose length or modulus is related to the intensity by
|F(hkl)| ∝ [I(h,k,l)]^(1/2)
Unfortunately the phase, or direction, of this complex number cannot be derived directly from the experiment. If the phases can be determined or guessed, so that the full F(hkl) values are known, we can evaluate the Fourier synthesis above for ρ and use it to determine where the atoms in the unit cell are. The electron density is higher around larger atoms than around smaller ones, so we can even identify which atom is where. Thus determining the phases is equivalent to knowing the full three-dimensional structure of the molecule making up the crystal. Early crystal structures were often determined by guessing a structure, calculating the F(hkl) values the guessed structure would produce, and seeing whether their magnitudes agreed with the measured intensities; if the agreement was good, the guess was correct. This is a productive technique provided the number of plausible guesses one needs to try is small, as is true of structures with fewer than (say) ten atoms; beyond that it becomes impractical. Other indirect methods developed in the 1920's through 1940's extended the applicability of crystallography to structures of 150 atoms or more.
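The Fourier synthesis above can be sketched numerically. The following fragment is a toy illustration rather than production crystallographic code: it assumes a rectangular (orthorhombic) unit cell so that V = abc, and a small dictionary of already-phased structure factors. It also makes the phase problem concrete, since the density cannot be computed until phases are supplied from somewhere.

```python
import cmath

def electron_density(x, y, z, reflections, a, b, c):
    """Evaluate the Fourier synthesis above at the point (x, y, z).
    `reflections` maps (h, k, l) to an already-phased complex F(hkl);
    a, b, c are the unit-cell edge lengths, and V = a*b*c assumes a
    rectangular (orthorhombic) cell -- a simplification for clarity."""
    V = a * b * c
    total = 0j
    for (h, k, l), F in reflections.items():
        total += F * cmath.exp(-2j * cmath.pi * (h * x / a + k * y / b + l * z / c))
    # The density is real when each (h,k,l) is paired with its Friedel
    # mate (-h,-k,-l) carrying the complex-conjugate F.
    return (total / V).real

def amplitude_from_intensity(I, scale=1.0):
    """|F(hkl)| is proportional to the square root of the measured
    intensity I(h,k,l); the phase angle is NOT recoverable from I."""
    return scale * I ** 0.5

# Toy example: one Friedel pair with real F = 10 in a 10 Å cubic cell
refs = {(1, 0, 0): 10 + 0j, (-1, 0, 0): 10 + 0j}
print(electron_density(0.0, 0.0, 0.0, refs, 10.0, 10.0, 10.0))  # (10+10)/1000 = 0.02
```

The experiment delivers only the amplitudes via amplitude_from_intensity; guessing or measuring the phases that go into the reflections dictionary is the whole art of structure determination.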
In the 1920's a theory was advanced that explained how catalysis worked in biological systems. As mentioned in strand I, biochemists knew that biological systems contained catalysts, and they invented the name "enzyme" to describe these biological catalysts. In 1926 J.B. Sumner crystallized an enzyme called urease and showed that the crystals contained protein. The notion that enzymes were proteins was disputed for another decade, but by about 1936 biochemists were in agreement: enzymes are proteins, and proteins are responsible for the large and selective increases in reaction rates observed in biochemical systems.
Note, incidentally, that not all proteins are enzymes: a very large fraction of the protein in the biosphere consists of structural proteins, which do not participate in biochemical reactions as catalysts but rather hold physical structures in an organism together. Even among chemically active proteins, not all are enzymes. Some are carriers, like hemoglobin; some act as control elements, exerting their effects on biochemical systems indirectly.
The understanding that enzymes are proteins led the way to an appreciation of their selectivity. Proteins were known to be large molecules, containing anywhere from roughly 800 to 100,000 atoms. If the architecture of a protein was such that it could incorporate one molecule into a crevice or channel (a so-called "active site") but could not incorporate another, closely related molecule, that architecture would explain the selectivity of enzymes.
This was an important breakthrough, but its applicability was limited: no one knew what the proteins looked like. Did they have a recognizable three-dimensional shape that could account for the specificity? Did proteins retain their shape and topology as they floated around in solution? Were particular protein architectures identifiable with particular classes of reactions? Biochemists tried to answer these questions in the 1930's through 1950's with tools like sedimentation that measured the overall shape of molecules. But it was clear that a real understanding of enzyme action would require an atomic-resolution picture of the enzyme, preferably with a reactant bound in the active site. Such a picture was unattainable with direct imaging techniques like electron microscopy or with indirect techniques like sedimentation.
Furthermore, the control issues had not been addressed. The mathematical and phenotypic concept of the gene was elegantly developed by Mendel in the nineteenth century and extended by others in the early twentieth, but the way genetic information was actually encoded in chromosomes was not understood. If cells "know" how to make enzymes to do chemical jobs in the cell, then it would be useful to understand how that knowledge is passed on through the chromosomes from one generation to the next. Furthermore, the problem of differentiation was still poorly understood.
Max Perutz was a student of W.L. Bragg at Cambridge in the 1940's. He realized the significance of protein structures, and embarked on an attempt to determine the X-ray crystal structure of a protein: hemoglobin, the oxygen carrier in red blood cells. His colleague John Kendrew began work in the same era on a slightly simpler protein, myoglobin, the oxygen-storage protein in muscle. Perutz, Kendrew, and their coworkers faced three problems. First, the molecules they were studying were substantially larger than any previously addressed by crystallographic techniques, and therefore produced many more spots that needed to be measured; for each spot it was necessary to determine the index values (h,k,l) and measure the intensity I(h,k,l) for use in the equations above. The second problem was the phase problem, i.e. the fact that we can experimentally determine I(h,k,l) and derive from it |F(hkl)|, but we cannot directly determine the phase angle associated with it. For a protein like hemoglobin, with over 2500 non-hydrogen atoms, the conventional phasing techniques developed for small molecules were doomed to failure, so Kendrew and Perutz had to develop, from scratch, entire toolkits of experimental and analytical techniques to determine phases. The final problem was that the crystals were not as well ordered as small-molecule crystals. Most small-molecule crystals are ordered well enough that individual atoms are fully distinguishable. Protein crystals, particularly in those early days when modern protein-purification techniques were still in the future, do not display this degree of order. So even with perfect intensity measurements and perfect phase estimates, the best Perutz and Kendrew could hope for was a rough architecture of their proteins. Higher resolution would require more order in the samples.
Nonetheless the Cambridge group persevered: in 1958 Kendrew published the first low-resolution structure of myoglobin, and Perutz's hemoglobin structure followed in 1960. They solved the data-collection problem by taking several X-ray exposures from each of thirty or more crystals and estimating the spot intensities by eye. They solved the phase problem by developing a method called "isomorphous replacement", in which heavy atoms like mercury are soaked into grown crystals of the macromolecule and the diffraction pattern of the derivatized protein is compared mathematically to that of the underivatized protein. Many man-years of theoretical development and many man-months of data reduction eventually yielded phase estimates accurate enough to enable the structure determinations. They did not initially solve the disorder problem: they lived with it.
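The arithmetic at the heart of isomorphous replacement can be sketched for a single reflection. Assuming the derivative structure factor is the sum F_PH = F_P + F_H of the native protein contribution and the heavy-atom contribution, the law of cosines pins down the native phase up to a twofold ambiguity. The function below is an illustrative simplification of my own; real phasing combines many reflections and, usually, more than one derivative to resolve the ambiguity.

```python
import cmath
import math

def sir_phase_candidates(F_P_mag, F_PH_mag, F_H):
    """Single isomorphous replacement for one reflection.  Given the
    native amplitude |F_P|, the derivative amplitude |F_PH|, and the
    complex heavy-atom contribution F_H (computed from the located
    heavy-atom positions), assume F_PH = F_P + F_H and apply the law
    of cosines to recover the native phase up to a twofold ambiguity."""
    cos_delta = (F_PH_mag**2 - F_P_mag**2 - abs(F_H)**2) / (2.0 * F_P_mag * abs(F_H))
    cos_delta = max(-1.0, min(1.0, cos_delta))  # clamp against measurement noise
    delta = math.acos(cos_delta)
    phi_H = cmath.phase(F_H)
    return (phi_H + delta, phi_H - delta)  # two candidate phases, in radians
```

A second heavy-atom derivative yields a second pair of candidates, and the phase common to both pairs is the correct one; that is the "multiple" in multiple isomorphous replacement.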
Perutz and Kendrew showed, consistent with results from other techniques, that these proteins are predominantly α-helical, i.e. they contain large segments of structure in which the atoms of the protein arrange themselves along helices; the crystal structures also revealed how those helical segments are arranged in space relative to one another. The area around the iron atom (where the oxygen binds) was of particular interest, and from the very first publication the Cambridge scientists began to analyze their structures to understand the dynamics of oxygen binding.
Over the next few years the techniques pioneered by Kendrew and Perutz were refined and extended to many other proteins. The methodological improvements included faster and more automated data-collection techniques; additional heavy-atom compounds for isomorphous replacement; a technique, now known as molecular replacement, that uses the structure of a known protein to provide phase information for a related protein; and ways of making better-ordered protein crystals. Thus the problems that had made the globin structures barely solvable became progressively more tractable. Meanwhile, many other protein structures were determined. Many were not helical; in fact, the majority of proteins possess a mixture of large-scale structural elements, known as α-helices, β-sheets, and coils. The globins' all-helical conformation is a relative rarity. In the 1970's experiments were conducted to demonstrate that the structure determined by X-ray methods is mostly preserved in the non-crystalline, soluble state of a protein; the structures determined by crystallography were not artifacts of the method, but reflected biological reality.
In the 1940's the role of deoxyribonucleic acid (DNA) in controlling what a cell does and how information is passed on to cellular progeny began to be elucidated. DNA was shown to constitute the infectious element in certain viruses; the base composition of DNA is invariant in tissues from a single species but varies from species to species; and the amount of DNA per cell appeared to be related to the complexity of the organism. By 1950 the crucial question was not whether DNA was the genetic control element, but how it coded genetic information and how it replicated. The determination of the structure of DNA was crucial to this understanding. James Watson and Francis Crick, in Max Perutz's and W.L. Bragg's laboratory at Cambridge, analyzed data from a method related to crystallography known as fiber diffraction. Their analysis showed that DNA folds up in a stable double-helical conformation, and that replication consists of separating the two helical strands and synthesizing a new partner for each. The information content is embodied in the sequence of nucleic acid building blocks, or bases, in either strand. The DNA message is translated into definitions of protein sequence through a ribonucleic acid (RNA) intermediate; the DNA-to-RNA step is called "transcription", and the RNA-to-protein step is called "translation". The genetic code itself, i.e. the relationship between particular DNA sequence elements and the corresponding protein sequence elements, was elucidated in the 1960's.
The last three decades have seen an explosion in our understanding of how these messages work and how they are controlled. The enzymatic control of the transcription and translation processes is now fairly well understood. The differentiation problem, in which identical DNA messages in every cell of a higher organism produce different protein-synthetic effects depending on the nature of the cell and its position, is substantially understood, although many details remain to be worked out. We have even arrived at a partial understanding of how a DNA molecule's structure changes as it interacts with RNA and proteins. We now know, for example, that proteins called "chaperonins" are sometimes involved in ensuring that newly made proteins fold up correctly as they emerge from the translation machinery.
Thus by the 1980's we began to have ways of determining the details of protein structures and of learning how those structures were coded for in the first place. These efforts gained practical relevance with the recognition that it is possible to design small molecules that bind to proteins, and to alter protein structures themselves, to produce useful products. The first of these recognitions has led to structure-based drug design; the second, to protein engineering.
In previous centuries and for much of this one, drugs were derived directly from natural sources. The thought that one might directly apply the techniques of synthetic organic chemistry to produce effective "ethical pharmaceuticals" (the conventional term for the products of the legitimate pharmaceutical industry) arose in the 1930's and 1940's. Even in the postwar era the pathway toward the development of a drug tended to be somewhat unsystematic. In some cases a natural product that had a known beneficial effect was identified chemically, and then modified only slightly by the synthetic chemists to produce one that had higher potency, less severe side effects, or (alas) a lower likelihood of a patent conflict. But the process depended on a reasonably potent starting product, and the modifications performed by the chemists were often launched at random rather than systematically planned. The understanding of the molecular basis of disease, particularly in cases where the absence or overactivity of a protein is causally connected to the disease, increased the chance for a systematic approach to the design effort. But even up to the present, a large fraction of drug-development research consists of synthesizing large numbers of compounds nearly at random and testing them for possible beneficial effects. The technique of "high-throughput screening" involves generating huge numbers--often in the hundreds of thousands--of related compounds and efficiently testing all of them against a model for the disease state of interest. If any positive responses arise, the chemists then go back to identify which of the compounds they had shoved through the screen actually produced the effect.
In structure-based drug design a more systematic approach is taken. A protein is identified through conventional biochemical and cell-biological approaches as being associated with a particular disease. The three-dimensional structure of the protein is determined by X-ray crystallography. A structural biologist then examines the structure to identify what kinds of small molecules might fit into the active site of the protein. Relying on both geometry (how well will it fit?) and electrostatics (does it have charges and dipoles in the right places?), the scientist designs molecules that are likely to interact well. He or she then attempts to convince the synthetic chemists to make those ligand compounds, and tests them--both for their biological activity via an assay, and for their structural relevance by determining the structure of the protein-ligand complex. The structure of the complex can be used as a guide to build a second generation of candidate compounds that will bind still more tightly. After a few rounds of these iterative improvements, each of which involves both a bioassay and a structure determination, the proto-drug might bind a thousand times better than the initial ligand compound or "lead candidate". At that point the designers begin to concern themselves with other properties of the compound, such as its solubility and the likelihood of side effects. In the end many small changes will have been made in the lead candidate, and most of the changes will have been guided by structural studies. Most of the anti-AIDS drugs now on the market have been developed in this manner, and increasing numbers of drugs associated with bacterial infections, cancer, and viral diseases are being developed this way.
In protein engineering the focus is on modifying biological proteins themselves rather than using them as fixed templates for the design of small molecules. Numerous proteins have industrial applications-- proteases (proteins that chew up other proteins) are used in laundry products, amylase is used to break down starch, xylose isomerase is used to convert glucose (which is cheap) to fructose (which is not), and so on. These proteins have usually evolved in ways that do not suit them perfectly to a human-directed bioreactor environment: they operate optimally at a pH or temperature that is distant from those the experimenter wants to use, or they give rise to unwanted side-reactions, or their turnover rate is not high enough. Protein engineering involves identifying and effecting small changes to the structure of a commercially interesting protein that will modify its behavior. Increasing the thermal stability of enzymes used in laundry products, changing the pH optimum of xylose isomerase, and altering the substrate specificity of proteases are all projects that protein engineers have undertaken in the last fifteen years with varying degrees of success. But the technique depends on determining structures of the unmodified and modified proteins throughout the design process.
Methodological improvements in crystallography eventually reached the point where stronger, energy-tunable X-ray sources were needed. These became available in the 1970's and 1980's in the form of electron storage rings, from which very intense, tunable X-ray beams could be derived; they are now essential tools for the majority of serious macromolecular crystallographers. Beamlines at these storage rings, or synchrotrons, are the facilities at which crystallographers and other researchers get access to the tunable, high-intensity X-ray beams they need for their research. In the early days of electron storage rings, a few beamlines became dedicated to crystallographic experiments, and the number has slowly grown to the point where there are about twenty crystallography beamlines at the five operating electron storage rings in this country and another thirty or so outside the US. At each of these facilities, scientists can obtain as much X-ray data in a visit lasting two to seven days as they could get on their home X-ray sources in a month or more. At the "third-generation" facilities--the European Synchrotron Radiation Facility in Grenoble, France, the Advanced Photon Source in Argonne, Illinois, and SPring-8 in Japan--the increase in effective throughput is especially striking. The tunability of the source--that is, the ability of the scientist to choose the wavelength of X-rays that will be used in a particular experiment--means that experiments become possible at storage rings that are impossible at home sources. At this point close to half the structure determinations published in prestigious journals like Science and Cell are derived at least in part from storage-ring data. The storage ring is thus a crucial part of the collection of tools available to the structural biologist.
Macromolecular crystallographers in industry, engaged in drug design and (less often) protein engineering, participated in this movement toward storage-ring-based data collection in the 1980's, but to a lesser degree than academic crystallographers. Their projects tended to be slightly less "cutting-edge" from a technical perspective than those of academic crystallographers, and they wanted control over the intellectual property rights to the information derived from the structure determinations. For both of these reasons, industrial crystallographers tended not to get large fractions of the storage-ring beam time. So in the mid-1980's a group of industrial crystallographers, led by Keith Watenpaugh of Upjohn and Noel Jones of Eli Lilly, organized their colleagues at other pharmaceutical and chemical companies to build and outfit a crystallographic facility specifically tailored to their needs and schedules. After substantial discussion (mostly among the company lawyers) this group formed the Industrial Macromolecular Crystallography Association (IMCA) to realize the plans of Drs. Watenpaugh and Jones. The IMCA Collaborative Access Team (IMCA-CAT) was organized in 1991, with Illinois Institute of Technology serving as the contractor charged with building and maintaining the IMCA facilities at the Advanced Photon Source. IMCA-CAT spent several years buying and building equipment for its facilities, and began experimental operations in 1997. I came to IIT as associate director of IMCA-CAT in December 1995 and was appointed director in mid-1996. IMCA decided in 2001 that it needed a full-time director, so last summer the IMCA Supervisory Board appointed Lisa Keefe as director of IMCA-CAT and named me Chief Scientific Officer. IMCA-CAT now provides high-quality, high-reliability diffraction data for the twelve companies of the IMCA consortium and for IIT-based research, some of which is government-sponsored.
Generally this is a productive arrangement for all concerned.