PHY 562: Biophysics
Spring 2009

Lectures Mondays and Wednesdays, 1:30–2:50 PM in 303 Jadwin Hall
[First lecture, Mon 2 Feb 2009]

Evening problem sessions, Thursdays 6:30–8:00 PM in 280 Carl Icahn Laboratory
[First session, Thu 12 Feb 2009]

Professor William Bialek
237 Carl Icahn Laboratory
Princeton University

Note: if you would like to reach me, the best bet is to contact my assistant, Ms Barbara Brinker (609-258-7014).

The teaching assistant for the course will be Anand Murugan, anandm@princeton.edu. Problem sessions will be staffed by a consortium of postdoctoral fellows, including Justin Kinney, Pankaj Mehta, Thierry Mora, Stephanie Palmer, Greg Stephens, and Aleksandra Walczak. Many thanks to all of these folks for their help!
For several years I have been teaching PHY 562 at Princeton, a biophysics course for PhD students in Physics; I'll teach it again in the Spring of 2009. This is a "core course" for the Physics PhD program, and as such aims at students who have mastered a fair bit of physics, certainly everything we throw at our undergraduates, maybe a bit more. On the other hand, a number of talented and ambitious undergraduates have taken the course each year, and I think they enjoy it. To be fair to the goals of the PhD program, I will keep the lectures at a fairly high level; to be fair to the undergraduates, they will be graded on a different scale. If you have concerns about your preparation for the course, or about the workload, please don't hesitate to ask.
I have the ambition of turning my lecture notes into a book, to be published by Princeton University Press. Thus, this is a combination of a course web site and a preview of the book. I hope this is useful. You will find that problems are embedded in the text, and current students should look to Princeton's Blackboard web site for weekly assignments (see also the link above). Sections of the text are listed below with planned dates for the corresponding lectures, although there may be some slippage in the schedule. Also, the text has been formatted for double-sided printing; if you print out successive sections as the course progresses, they should fit together (more or less) into a draft of the book.
I should emphasize that things are a bit rough. The course changes every time I teach it, and whole sections have never been written up. I try, in spots, to give a hint (usually in a different typeface) about what is missing. I have a deadline for the book in March 2009, so hopefully there will be rapid progress. If something catches your eye as problematic (or especially interesting), please don't hesitate to drop me a note. If you'd like to cite any of the things you find here, I think you can use

W Bialek, Biophysics: Searching for Principles. http://www.princeton.edu/~wbialek/PHY562.html (2009).

Please note that, at the moment, my referencing of the original literature is somewhat haphazard; the absence of references thus is not a claim of originality! Stay tuned for updates, or ask me specifically for links to the relevant papers.
Chapter 0: Preface
This section provides some perspective on the subject that might be useful as a general introduction to the course. Here I also say a bit about the evolution of the text itself, and acknowledge my debt to many collaborators from whom I have learned so much.
Chapter 1: Photon counting in vision
Sitting quietly in a dark room, we can detect the arrival of individual photons at our retina. This observation has a beautiful history, with its roots in a suggestion by Lorentz in 1911. Tracing through the steps from photon arrival to perception, we see a sampling of the physics problems posed by biological systems, ranging from the dynamics of single molecules, through amplification and adaptation in biochemical reaction networks and coding and computation in neural networks, all the way to learning and cognition. For photon counting some of these problems are solved, but even in this well-studied case many problems are open and ripe for new theoretical and experimental work. We will look at photon counting not just for its intrinsic interest, but also as a way of motivating some more general questions. The problem of photon counting also introduces us to methods and concepts of much broader applicability. We begin by exploring the phenomenology, aiming at the formulation of the key physics problems.
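The statistics underlying this phenomenology are worth keeping in mind from the start: photon arrivals are Poisson distributed, so even an ideal observer detects a flash of fixed nominal intensity only probabilistically. A small sketch of my own (illustrative only, not part of the course materials), assuming the classic "K or more photons" criterion from the Hecht, Shlaer and Pirenne (1942) analysis:

```python
# Poisson statistics of dim-light detection (a hypothetical sketch, not course code).
import math

def poisson_pmf(k: int, nbar: float) -> float:
    """Probability of exactly k photon arrivals given mean count nbar."""
    return math.exp(-nbar) * nbar ** k / math.factorial(k)

def p_see(nbar: float, K: int = 6) -> float:
    """Probability that K or more photons arrive; K ~ 5-7 in the classic
    Hecht, Shlaer & Pirenne (1942) analysis of frequency-of-seeing data."""
    return 1.0 - sum(poisson_pmf(k, nbar) for k in range(K))

# The frequency-of-seeing curve rises smoothly with mean flash intensity:
for nbar in (2.0, 6.0, 20.0):
    print(f"nbar = {nbar:5.1f}   P(see) = {p_see(nbar):.3f}")
```

Sweeping the mean intensity traces out a smooth frequency-of-seeing curve whose steepness depends on the assumed threshold K; fitting curves of this kind to data is how the classic experiments bounded the number of photons needed for detection.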
1.3 Dynamics of biochemical networks
1.4 Signal processing at the first synapse
1.5 Pointers to higher level issues
1.6 Coda
Chapter 2: Noise isn't negligible
The great poetic images of classical physics are those of determinism and clockwork. Strikingly, life operates far from this limit. Interactions between molecules involve energies of just a few times the thermal energy, and biological motors, including the molecular components of our muscles, move on the same scale as Brownian motion. Biological signals often are carried by just a handful of molecules, and these molecules inevitably arrive randomly at their targets. Human perception can be limited by noise in the detector elements of our sensory systems, and individual elements in the brain, such as the synapses that pass signals from one neuron to the next, are surprisingly noisy. How do the obviously reliable functions of life emerge from under this cloud of noise? Are there principles at work that select, out of all possible mechanisms, the ones that maximize reliability and precision in the presence of noise? Are there ways in which noise can be productive, rather than a nuisance?
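A quick way to see why small numbers matter: if a signal is carried by N molecules arriving independently, the counts fluctuate with a standard deviation of roughly the square root of N, so relative precision improves only as 1/sqrt(N). A toy simulation of my own (standard library only; not course code):

```python
# Counting noise: a toy illustration of 1/sqrt(N) scaling (my sketch, not course code).
import math
import random

random.seed(0)

def sample_poisson(lam: float) -> int:
    """Draw one Poisson(lam) count via exponential waiting times in [0, 1]."""
    t, k = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > 1.0:
            return k
        k += 1

def relative_noise(mean_count: float, trials: int = 5000) -> float:
    """Empirical (std / mean) of Poisson-distributed molecule counts."""
    counts = [sample_poisson(mean_count) for _ in range(trials)]
    m = sum(counts) / trials
    var = sum((c - m) ** 2 for c in counts) / trials
    return math.sqrt(var) / m

for n in (16, 64, 256):
    print(f"N = {n:4d}   relative noise ~ {relative_noise(n):.3f}"
          f"   1/sqrt(N) = {1 / math.sqrt(n):.3f}")
```

Doubling the precision thus costs a fourfold increase in molecule number, which is one reason a cell operating with tens of copies of a protein cannot treat noise as negligible.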
2.1 Molecular fluctuations and chemical reactions
2.2 Molecule counting and chemotaxis
2.3 Molecule counting more generally
2.4 More about noise in neurons and perception
2.5 Proofreading and active noise reduction
2.6 Taking advantage of fluctuations
Chapter 3: No fine tuning
Imagine making a model of all the chemical reactions that occur inside a cell. Surely this model would have many thousands of variables, so we would have thousands of differential equations. If we write down this many differential equations with the right general form but choose the parameters at random, presumably the resulting dynamics (that is, what we get by solving the equations) will be chaotic. Although there are periodic spurts of interest in the possibility of chaos in biological systems, it seems clear that this sort of "generic" behavior of large dynamical systems is not what characterizes life. On the other hand, it is not acceptable to claim that everything works because every parameter has been set to just the right value; in particular, these parameters depend on details that might not be under the cell's control, such as the temperature or concentration of nutrients in the environment. More specifically, the dynamics of a cell depend on how many copies of each protein the cell makes, and one either has to believe that everything works no matter how many copies are made (within reason), or that the cell has ways of exerting precise control over this number; either answer would be interesting. This problem, the balance between robustness and fine tuning, arises at many different levels of biological organization. Our goal in this chapter is to look at several examples, from single molecules to brains, hoping to see the common themes.
3.1 Sequence ensembles and protein folding
3.2 Computational function and ion channel densities
3.3 Associativity and long time scales in neural networks
3.4 Adaptation
3.5 Reproducibility in morphogenesis
3.6 The states of cells
Chapter 4: Efficient representation
The generation of physicists who turned to biological phenomena in the wake of quantum mechanics noted that to understand life one has to understand not just the flow of energy (as in inanimate systems) but also the flow of information. There is, of course, some difficulty in translating the colloquial notion of information into something mathematically precise. Indeed, almost all statistical mechanics textbooks note that the entropy of a gas measures our lack of information about the microscopic state of the molecules, but often this connection is left a bit vague or qualitative. Shannon proved a theorem that makes the connection precise: entropy is the unique measure of available information consistent with certain simple and plausible requirements. Further, entropy also answers the practical question of how much space we need to use in writing down a description of the signals or states that we observe. This leads to a notion of efficient representation, and in this section of the course we'll explore the possibility that biological systems in fact form efficient representations, maximizing the amount of relevant information that they can transmit and process, subject to fundamental physical constraints. We'll see that these ideas have the potential to tie together phenomena ranging from the control of gene expression in bacteria to learning in the brain.
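A one-line statement of the key result may help fix ideas; this is standard Shannon (1948), not anything special to these notes:

```latex
% Entropy of a discrete distribution {p_i}: by Shannon's theorem, the unique
% measure (up to a choice of units) that is continuous in the p_i, maximal
% for the uniform distribution, and additive under grouping of outcomes.
S = -\sum_{i=1}^{N} p_i \log_2 p_i \qquad \text{(bits)}
```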
4.2 Entropy lost and information gained
4.3 Does biology care about bits?
4.4 Optimizing information flow
4.5 Maximum entropy
4.6 Gathering information and learning rules
Chapter 5: Outlook: How far can we go?
Appendices