Biology 403/504, Second Lecture
Thursday 24 January 2008
- Why we care about thermodynamics in biochemistry
- The laws of thermodynamics
- Thermodynamic properties
- Entropy in solvation and binding to surfaces
- Free energy
- Free energy and equilibrium
- Free energy as a source of work
- Coupled reactions
- ATP as an energy currency
- Other high-energy compounds
- Dependence on concentration
Why we care about thermodynamics in biochemistry
Much of what we will study in this course involves reaction
pathways—the conversion of one compound to another and another, onward
toward some final product that gets used in a structural or functional way
in an organism. In order to understand whether the reactions that produce
the intermediate and final products will proceed, we need to know whether
the reactions give off free energy or require free energy under physiological
conditions. If they give off free energy, they will proceed without prompting;
if they require energy, we will need to understand where the energy to drive
the reaction comes from. This is the stuff of thermodynamics.
We observed on Tuesday that thermodynamics alone will
not tell us whether a reaction will proceed in a reasonable time frame.
If the activation energy that separates reactants from products is high
enough, the time required for a system to come to equilibrium will be long
enough that the reaction will not play out within the lifetime of an organism.
It is the job of biological catalysts—enzymes—to reduce
this activation energy enough to make the kinetics of a reaction practical.
Kinetics involves an understanding of energy, just as thermodynamics does.
Thus the application of energetic considerations in biochemistry involves
more than thermodynamics—it involves kinetics,
and the ways that enzymes modify kinetics. But for today
we'll focus on the energetics of equilibrium, i.e. thermodynamics.
The laws of thermodynamics
There are four fundamental laws of thermodynamics, which
for historical reasons are known as the zeroth, first, second, and third
laws. The ones of immediate relevance to biochemistry are the first and second
laws. These can be articulated in a variety of ways, but for our purposes:
- The first law of thermodynamics says that the energy of a closed
system is constant.
- The second law of thermodynamics says that the entropy of a closed
system tends to increase over time.
These laws presuppose some understanding of what energy
and entropy are. The definition of energy is something you have encountered
in some detail in other courses, but for this purpose we will think of it
as the capacity to perform work. Entropy can be defined as the amount of
disorder in a system, such that a system that is free to assume many states
has more entropy than a system that is restricted to a smaller number of
states.
Note the emphasis on the notion of a "closed system"
in the thermodynamic laws.
An organism is not a closed system: it interacts with its surroundings.
Therefore the energy associated with a cell or a
larger biological entity may increase, provided that the energy comes from
its surroundings.
Ultimately the source of biological energy is the sun.
Similarly, the entropy of a cell or organism may decrease, provided that,
at the same time, the entropy of the surroundings increases by a larger amount
than the decrease in the entropy of the biological entity. We can think of
biology as the study of entities that decrease their local entropy while
leaving behind a trail of increased entropy.
Enthalpy H is a concept closely related to total energy E:
H = E + PV
where P and V are pressure and volume respectively.
In biochemical systems the volume within which reactions occur is usually
(but not always) constant,
and the pressure within aqueous solutions is locally fixed.
There are circumstances where pressure varies in biological systems.
Certainly pressure varies while gases are passing in and out of
permeable membranes like those of the human lung,
and within a 100m long strand of aquatic seaweed there
will be an appreciable difference in pressure between the top and the bottom.
But in most biochemical systems both pressure and volume
are effectively constant,
so the difference between enthalpy and energy is insignificant.
Energy and enthalpy are examples of extensive properties,
i.e. their values are proportional to the
amount of material under consideration. Thus if N molecules have,
in aggregate, an energy E and an enthalpy H, then 2N
molecules with the same properties will have energy 2E and enthalpy
2H. By contrast, temperature and pressure are not proportional to the
number of molecules present, and as such are described as intensive properties.
In practice we tend to work with intensive versions of energy,
enthalpy, and similar properties, by measuring, for example,
the energy per molecule or the energy per mole of a substance.
Thus we will characterize the enthalpy of a substance
in units of kJ/mole, i.e. the number of kilojoules of enthalpy per mole of
substance. This has the obvious advantage of being an inherent property
of the molecular species under consideration, rather than something we have
to measure separately for every bundle of that substance.
Energy, enthalpy, and entropy are state variables;
they do not depend on how a system was created.
The path in getting from
one state to another does not change the ending values of these properties,
whereas other properties (like work and heat) do depend on the path.
Work and heat, then, are not state variables.
Recall from physics courses that the Joule is the MKS unit of energy, and
is 1 kg·m^2/s^2.
Two convenient units in biochemistry are the kilojoule/mole (kJ/mol)
and the kilocalorie/mole (kcal/mol). A kilojoule
is 10^3 Joules.
A kilocalorie is the amount of energy required to
increase the temperature of one kg of water at 4 deg C
(277.15 K) by one Kelvin.
This turns out to be 4.184 kJ, so 1 kcal/mol = 4.184 kJ/mol = 4184 J/mol.
In almost any thermodynamic discussion, the appropriate unit of temperature
is the Kelvin, which used to be called a degree Kelvin.
The abbreviation for a Kelvin is K. Occasionally I will lapse into the
old-fashioned nomenclature and mention "deg-K" or "degree Kelvin" when
I mean "Kelvin." You're free to jeer at me if I do that.
I may say something about a "degree," by which I mean a Kelvin. This isn't
actually wrong, so don't jeer if I do that.
Recognize that the size of the Kelvin is the same as that of the
degree Celsius; the only difference is the starting point of the scale.
Zero Kelvin is the theoretical minimum of the temperature scale.
Zero Celsius is defined as 273.15 Kelvin; it is the temperature (approximately)
at which water freezes at 1 atmosphere of pressure.
Occasionally we may work with the electron-volt as a unit of energy.
Since a volt is a Joule per Coulomb of charge, and one electron carries
a charge of 1.602*10^-19 Coulombs,
an electron-volt is 1.602*10^-19 Coulomb * 1 J/Coulomb =
1.602*10^-19 J = 1.602*10^-22 kJ.
A mole is 6.022*10^23 molecules: it is really not a unit
at all, but rather just a convenient way of counting large numbers of
atoms (or molecules, or electrons, or graduate students).
Therefore a kJ/mol is a kJ divided by this large number of objects,
so 1 kJ/mol = (1/6.022*10^23) kJ = 1.661*10^-24 kJ.
Thus 1 kJ/mol = 1.661*10^-24 kJ / (1.602*10^-22 kJ/eV) =
1.037*10^-2 eV = 0.01037 eV.
Conversely, 1 eV = 96.5 kJ/mol.
We will soon discuss the fact that the hydrolysis of the high-energy
phosphate bond in adenosine triphosphate has a ΔG°
of about 33 kJ/mol; we can see that this is about 0.34 eV.
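As a quick sanity check on these conversions, here is a short sketch in Python; the constant names are my own, and the values are the rounded figures quoted above.

```python
# Verify the unit conversions in the text (constants rounded to 4 figures).
AVOGADRO = 6.022e23    # molecules per mole
E_CHARGE = 1.602e-19   # Coulombs per electron

# One electron-volt in kJ: charge (C) times 1 J/C, converted to kJ.
EV_IN_KJ = E_CHARGE * 1.0 / 1000.0           # 1.602e-22 kJ

# One kJ/mol expressed per molecule, then in eV.
KJ_PER_MOLECULE = 1.0 / AVOGADRO             # 1.661e-24 kJ
EV_PER_KJ_MOL = KJ_PER_MOLECULE / EV_IN_KJ   # ~0.01037 eV per kJ/mol

# ATP hydrolysis, ~33 kJ/mol, expressed in eV:
atp_ev = 33.0 * EV_PER_KJ_MOL                # ~0.34 eV
```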
We've already said that entropy is a measure of the
disorder in a system. Entropy turns out to be proportional to the logarithm
of the number of degrees of freedom Ω in a system:
S = k ln Ω
where k is Boltzmann's constant, 3.3*10^-24 cal/K,
or 1.38*10^-23 Joule/K.
We often measure entropy in entropy units (eu), where 1 eu = 1 cal/(mol K).
The gas constant
R is the product of Avogadro's number N and k, so if the entropy
of a single molecule is S, then the entropy of a mole of the same kind
of molecules will be NS = R ln Ω.
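A minimal numerical sketch of the Boltzmann relation, using the rounded constants from the text (the function names are my own):

```python
import math

K_B = 1.38e-23   # Boltzmann's constant, J/K
R = 8.314        # gas constant R = N * k, J/(mol K)

def entropy_per_molecule(omega):
    """S = k ln(omega): entropy of one molecule with omega accessible states."""
    return K_B * math.log(omega)

def entropy_per_mole(omega):
    """NS = R ln(omega): molar entropy for the same omega."""
    return R * math.log(omega)

# Doubling the number of accessible states adds k ln 2 per molecule:
delta_s = entropy_per_molecule(2) - entropy_per_molecule(1)
```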
The second law of thermodynamics says that in general
the entropy of a closed system will increase, i.e. that for the most part
the universe tends toward a larger number of degrees of freedom or a larger
amount of disorder.
The entropy of a single molecule can be characterized by
statistical-mechanical methods if the molecule is simple enough.
The following table,
adapted from table 2.1 in Zubay's Principles of Biochemistry,
breaks the entropy of liquid propane into translational, rotational,
vibrational, and electronic components:
This pattern, in which most of the entropy is translational and rotational,
is typical of biomolecules.
By contrast, the enthalpy in a biomolecule is
usually dominated by electronic properties.
Translational entropy depends primarily on
(3/2)RlnMr, where Mr
is the molecular weight.
In a dimerization reaction the total translational entropy decreases:
two independent molecules become one, and although Mr doubles,
the logarithm of Mr only increases by ln 2, which is not enough to
compensate for the lost particle.
Thus the translational entropy goes down.
Rigidity decreases entropy, because rigid structures
cannot rotate as freely and often cannot vibrate as freely.
Entropy in solvation and binding to surfaces
What happens when molecules go into solution? The solute molecules usually
undergo an increase in entropy, because they become free to dissociate from
one another, and in the case of ionic solutes the cations can separate from
the anions. On the other hand, the solvent molecules frequently become more
organized in the vicinity of the solute molecules than they had been before
the introduction of the solute, so their contribution to total change in entropy
is frequently negative. The net effect is often slightly negative, i.e. the
solution has a slightly lower entropy than the separated components.
When an apolar molecule is added to water, the water molecules often form
an ordered, cage-like shell around the foreign molecule. This shell is
highly ordered, so the entropy of the system decreases.
Many biochemical reactions involve binding of small molecules to a surface,
e.g. the surface of a protein.
In inorganic chemistry the binding of small molecules to surfaces often
involves a decrease in entropy because the molecules
binding to the surface lose rotational degrees of freedom.
But in biochemistry the loss in rotational freedom is more than
compensated for by the increase in entropy associated with the release of
water molecules from the protein surface.
Thus the binding of metabolites to a protein is often entropically favored.
Josiah Gibbs articulated the concept of free energy (sometimes called
Gibbs free energy), which is related to entropy and enthalpy by
G = H - TS
The change in free energy when a reaction occurs is
ΔG = ΔH - TΔS
assuming the temperature does not change. Temperature in a biochemical system
in general changes very slowly, so this is a reasonable assumption.
Gibbs was able to show that a chemical reaction will occur spontaneously
if and only if the change in free energy is negative:
ΔG < 0
For the most part we will analyze biochemical reactions in terms of their
spontaneity and therefore in terms of whether ΔG < 0.
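This spontaneity test can be sketched as a pair of small functions (the names and example numbers are my own, chosen only for illustration):

```python
def delta_g(delta_h, delta_s, temp):
    """Gibbs free-energy change dG = dH - T*dS.
    dH in kJ/mol, dS in kJ/(mol K), temp in Kelvin."""
    return delta_h - temp * delta_s

def is_spontaneous(delta_h, delta_s, temp):
    """A reaction proceeds spontaneously when dG < 0."""
    return delta_g(delta_h, delta_s, temp) < 0

# Example: dH = -10 kJ/mol, dS = +0.02 kJ/(mol K) at 298 K gives
# dG = -10 - 298*0.02 = -15.96 kJ/mol, so the reaction is spontaneous.
```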
We can compute ΔG per mole for a wide variety of compounds.
A useful formulation is that of the standard free energy of formation
of a compound, ΔG°f, which is the difference between the
free energy of the compound in its standard state and the total free energies
of the elements of which the compound is composed.
This table (again adapted from Zubay) contains some examples of
ΔG°f values for metabolites:
We can use these values to calculate the overall change in standard free
energy associated with a biochemical reaction.
There are some tricks and special cases to consider.
But the concept is straightforward:
given the known values of ΔG°f
for the reactants and products in a reaction,
we can calculate the overall change in standard free energy in a reaction
by adding up the ΔG°f values for the
products and subtracting the ΔG°f
values for the reactants.
The ΔG°f values are generally negative,
so we'll be subtracting a negative number from another negative number. If
the total comes out negative, the reaction is spontaneous; if it comes out
positive, the reaction is not spontaneous.
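The bookkeeping can be sketched in Python. The ΔG°f values below are approximate textbook figures for the combustion of glucose, used purely for illustration; they are not taken from the table in these notes.

```python
# Standard free energies of formation, kJ/mol (approximate textbook values).
DGF = {
    "glucose": -917.0,
    "O2": 0.0,        # elements in their standard state have dGf = 0
    "CO2": -394.4,
    "H2O": -237.2,
}

def reaction_dg(reactants, products):
    """dG0 = sum(n * dGf) over products - sum(n * dGf) over reactants.
    Each side is a list of (coefficient, species) pairs."""
    def total(side):
        return sum(n * DGF[s] for n, s in side)
    return total(products) - total(reactants)

# glucose + 6 O2 -> 6 CO2 + 6 H2O
dg = reaction_dg([(1, "glucose"), (6, "O2")],
                 [(6, "CO2"), (6, "H2O")])
# dg is large and negative, so combustion is spontaneous
```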
Free energy and equilibrium
Gibbs established the relationship between ΔG°
and the equilibrium constant of a reaction:
ΔG° = -RT ln Keq
where Keq is the equilibrium constant of the reaction. In a bimolecular reaction
aA + bB -> cC + dD
this equilibrium constant is
Keq = [C]^c[D]^d / ([A]^a[B]^b)
Thus if a reaction is just barely spontaneous,
i.e. ΔG° = 0, then
Keq = 1.
If ΔG° < 0 then Keq
> 1, i.e. there will be more products than reactants at equilibrium.
If ΔG° > 0
then Keq < 1,
i.e. there will be more reactants than products at equilibrium.
Reactions in which ΔG° < 0 are called exergonic;
reactions in which ΔG° > 0 are called endergonic.
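The relationship can be sketched numerically; this is a minimal version with R in kJ/(mol K) and 25 deg C assumed (the function names are my own):

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol K)
T = 298.15     # K

def keq_from_dg(dg0):
    """Equilibrium constant from standard free energy: dG0 = -RT ln Keq."""
    return math.exp(-dg0 / (R * T))

def dg_from_keq(keq):
    """The inverse relation: dG0 = -RT ln Keq."""
    return -R * T * math.log(keq)

# dG0 = 0 gives Keq = 1; exergonic (dG0 < 0) gives Keq > 1,
# endergonic (dG0 > 0) gives Keq < 1.
```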
Free energy as a source of work
The change in free energy tells us the maximum amount
of useful work that can be derived from a biochemical reaction. If
ΔG° is negative,
then the largest amount of useful work that could
be extracted from the reaction is -ΔG°.
Some of that energy will go into heat, though,
so the actual amount of work we can get will always
be less than -ΔG°.
Organisms use this work in at least three ways:
- To move objects, as in muscle contraction and flagellar swimming.
- To move molecules against concentration gradients
and ions across potential gradients.
- To drive otherwise endergonic reactions either by direct coupling or by
depleting concentrations of reactants enough to make the reaction favorable.
This last case is crucial to many biochemical pathways and will be considered
in greater detail.
In some cases, a single enzyme catalyzes two successive reactions, the first
of which is exergonic and the second of which is endergonic. In that case,
in effect, the overall reaction happens in one shot, with the energy from
the exergonic part of the sequence driving the endergonic part. If the overall
ΔG° < 0 for the pair of reactions,
the products will be produced.
In other cases, two reactions may not be spatially coupled. Instead, the
fact that the first reaction produces a high concentration of its product(s)
results in a high concentration of the reactant(s) for the second reaction.
Because ΔG depends on the ratio of product to reactant concentrations,
this imbalance changes the value of ΔG
enough to render the second reaction possible.
ATP as an energy currency
Some reactions encountered in biology are exergonic (have an overall negative
ΔG), whereas some are endergonic (positive ΔG).
The endergonic reactions in general are coupled with exergonic reactions
so that they can proceed.
In order for this approach to work, the cell needs a ready supply
of high-energy compounds, the modifications of which can be used to drive
otherwise endergonic reactions.
In general the reactions involve coupling the hydrolysis of a high-energy
bond with some endergonic process. Most of the high-energy bonds are
between phosphorus and oxygen atoms, and the reactions involve hydrolyzing
this phosphorus-oxygen bond:
R-O~P=O + H2O -> R-O-H + PO4^3-
The most common compound involved in this process is adenosine
triphosphate (ATP).
It can be hydrolyzed at either the gamma phosphate (the one farthest from
the ribose ring) or at the beta phosphate (the one in the middle).
In the former case, about 7.8 kcal/mol (32.6 kJ/mol)
is released by the hydrolysis:
ATP + H2O -> ADP + Pi
where Pi is a standard abbreviation for inorganic phosphate.
A similar amount is released in hydrolysis at the beta phosphate:
ATP + H2O -> AMP + PPi
where PPi is a standard abbreviation for inorganic pyrophosphate.
However, pyrophosphate hydrolyzes into two molecules of ordinary phosphate,
with the release of a similar amount of energy. Appropriately coupled,
the hydrolysis of ATP to AMP and two equivalents of Pi can
therefore yield more than 15 kcal/mol of energy—enough to drive almost all
conventionally-encountered biochemical reactions.
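The arithmetic behind that figure, using the per-step values quoted above (the text gives ~7.8 kcal/mol per high-energy bond and says pyrophosphate hydrolysis releases "a similar amount"):

```python
KJ_PER_KCAL = 4.184

# Energies released per hydrolysis step, as quoted in the text (kcal/mol):
atp_to_amp_ppi = 7.8   # ATP -> AMP + PPi (beta-phosphate hydrolysis)
ppi_to_2pi = 7.8       # PPi -> 2 Pi, taken as a similar amount

# ATP -> AMP + 2 Pi, counting the pyrophosphate hydrolysis as well:
total_kcal = atp_to_amp_ppi + ppi_to_2pi   # > 15 kcal/mol
total_kj = total_kcal * KJ_PER_KCAL        # ~65 kJ/mol
```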
ATP thus acts as a kind of energy currency: a means of storing energy
that can be tapped for driving endergonic reactions to completion.
The energy has to come from somewhere: it comes from the creation of ATP,
with its high-energy phosphate bonds, from lower-energy substituents,
using various exergonic reactions as drivers.
We can think of the resting concentration of ATP in a cell as the equivalent
of a roll of quarters that the cell can spend when it needs energy.
Each ATP molecule acts as a single quarter when it's hydrolyzed to
ADP; it acts as a pair of quarters when it's hydrolyzed to AMP.
Most of the purchases the cell needs to make are for prices either just
below $0.50 (ATP -> AMP) or just below $0.25 (ATP -> ADP), so it's useful
currency for the cell to carry around.
None of the cellular vendors gives change,
so spending a quarter on $0.03 worth of merchandise at a time
is not very cost-effective;
but for items that cost $0.24 or $0.48, quarters are pretty efficient.
When the cell runs out of quarters,
it needs to go to the metabolic bank and get some more.
Other high-energy compounds
There are other compounds employed as energy-storage entities in cells.
None of the others is as plentiful or as widely-used as ATP, but they
play significant roles in certain pathways.
Each of these compounds contains
at least one high-energy phosphorus-oxygen bond, just as ATP does,
so the mechanisms are similar to those found in ATP hydrolysis.
But the specific ΔG values for these phosphate
compounds differ from that of ATP,
and as such they turn out to be more efficient in driving
particular classes of reactions.
So the cell may be carrying around several rolls of quarters (ATP molecules),
but it also carries around one roll of
40-cent pieces (creatine phosphate), one roll of 35-cent pieces
(phosphoenolpyruvate), and the like. Since the vendors don't make change,
creatine phosphate is a useful compound to carry when making 38-cent purchases.
Dependence on concentration
Is this bookkeeper's perspective altogether meaningful?
It is if we recognize the limitations to it.
A fundamental principle of chemical thermodynamics is that the free
energy difference in a reaction depends on the concentrations
(or, more precisely, the activities) of the products and reactants.
Therefore a reaction that would have a negative ΔG
if all the products and reactants had equal concentrations may have
a positive ΔG if there are high concentrations of products
and low concentrations of reactants when we begin to examine the system.
Specifically, we write this dependence on concentration as
ΔG = ΔG° + RT ln([products]/[reactants])
where, as we have discussed,
ΔG° is the standard free energy
of the reaction, viz. the free energy associated with a condition in
which the concentrations of products and reactants are all initially 1 M.
Thus if [products]/[reactants] is greater than one, the term on the
right will be positive and will make ΔG more positive,
or less negative, than the standard free energy ΔG°.
Note that if the concentrations of products and reactants are all 1 M, then
[products]/[reactants] = 1 and ln([products]/[reactants]) = 0, so
ΔG = ΔG°.
Thus it makes sense to define the standard free energy in these terms.
It is often impractical to make measurements in systems where the
starting concentrations are all 1 M, but we generally extrapolate to those
conditions without difficulty.
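The concentration correction can be sketched as follows; the temperature is set to 310 K (roughly physiological), and the example numbers are arbitrary illustrations, not measured values:

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol K)
T = 310.0      # roughly physiological temperature, K

def dg_actual(dg0, q):
    """dG = dG0 + RT ln(Q), where Q = [products]/[reactants]."""
    return dg0 + R * T * math.log(q)

# With Q = 1 (everything at 1 M) we recover the standard value:
standard = dg_actual(-30.0, 1.0)   # equals -30.0
# A reaction with positive dG0 can still run forward if Q is small
# (products scarce, reactants plentiful):
driven = dg_actual(5.0, 1e-3)      # comes out negative
```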
The way this concentration dependence affects our notion of energy currency
is that the "value" of a high-energy compound depends on its concentration
and the concentrations of the other molecules participating in a reaction.
Thus the reaction
ATP + X → ADP + Pi + Y
in which the free energy derived from ATP hydrolysis is used to drive
the conversion of X to Y, may have a ΔG°
that is moderately positive.
But if the concentrations of ADP, X, Y and ATP are such that
ln[products]/[reactants] is negative,
then we still may find that ΔG is negative and the reaction
will proceed to the right.
Often [ATP] > [ADP]. This tends to increase the spontaneity of these
ATP-driven reactions: [ADP]/[ATP] < 1, so the logarithm of that
ratio will be negative.
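To see the sign of that logarithm, assume a hypothetical ten-to-one ATP:ADP ratio (the concentrations below are illustrative, not measured cellular values):

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol K)
T = 310.0      # K

# Hypothetical concentrations with [ATP] > [ADP]:
atp = 3e-3     # M
adp = 3e-4     # M

# Contribution of the [ADP]/[ATP] factor to the RT ln(Q) term:
correction = R * T * math.log(adp / atp)   # negative, since [ADP]/[ATP] < 1
```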