An Apology for Protein Folding

We scientists tend to speak about our work in terms of trying to solve certain problems; for instance, you might have heard about the protein folding problem or (more exotically) the quantum gravity problem.  But how do we justify working on these problems in the face of problems like hunger or poverty? – Aren’t those problems more important, or at the very least, more pressing?  And if scientists are so good at solving problems like protein folding, then why aren’t they applying their tremendous powers to trying to solve problems like hunger? – How many more people could we help (and save) if we were doing that instead?  Is it selfish to be a scientist?

This is a question that has often weighed on me.  I’ve been thinking about it more recently with the arrival of a new roommate, a human biology major at Stanford, whose concern about global public health is nothing short of inspiring.  She represents science in action; I am science of the ivory tower.  She represents science in the service of man; I am science in the service of… I’m not quite sure what.

Questions like these disturb many scientists because they create cognitive dissonance between our (generally) progressive liberal world-views and the work which occupies our day-to-day lives.

The prototypical scientific apologia goes: “Science leads to technology.”  For instance, the discovery of nuclear spin and magnetic resonance led to one of medicine’s most significant imaging modalities (MRI).  Our hope is to blow the critic away with the Protean force that is Western science of the past two centuries.  And then the apology ends by bringing it back to one’s own research: “For all you know,” says the defensive scientist, “understanding protein folding could lead us to the cure for Alzheimer’s.”  Take that.

But there are multiple flaws with this apology.

First of all, not all technologies are good (e.g., the discovery of the atomic nucleus led to the atomic bomb).

Secondly, science is science and technology is technology.  They’re not the same thing.  You wouldn’t say (or at least, I wouldn’t say) that solving world hunger or narrowing the rich–poor gap is good because it would lead to a stronger economy.  It’s good just because it’s good – in other words, it’s good because it’s consistent with our values.  Put differently, this apology does not defend science on its own terms, but only because of what it can lead to.  Moreover, this apology really only defends the subset of science that leads to technology (e.g., applied science and engineering).  The simple fact of the matter is that most scientific discoveries do not lead to technologies; only a lucky (or unlucky, for that matter) subset do.

The road-less-taken apology (which I sometimes make) invokes the universality of Wonder as a quintessential human emotion. Science speaks to us at a basic level – it doesn’t feed the hunger of our stomach, but it nourishes the hunger of our minds.  “And isn’t that important too?” I say.  “As humans, could we bear to live in a world in which only our basic needs are met?”  Taking the apology back to the scientist’s own research: Protein folding is just so incredible – it appears to get so close to violating the laws of thermodynamics, but then it makes you do a double-take.  “It would theoretically take the lifespan of multiple universes for a protein to locate its native conformation amidst all the other possible conformations it could exist in… and yet the proteins we find in Nature only need a few milliseconds to do this.  Proteins are only held together with the weakest of forces – constantly at the precipice of falling apart – and yet they make spider silk as strong as steel and they give our cells the strength to exert mechanical forces too.”  Trying to figure out protein folding combines the satisfaction of solving a really difficult puzzle with the entertainment of watching an action-packed movie.

But this apology is far from perfect too.  Wonder is personal.  Not all of us share the same sense of wonder.  More importantly, this apology even reeks slightly of an intellectual hedonism – it’s saying that science is almost like a luxury product that those who do not need to be preoccupied with the fundamentals can indulge in.

For me at least, it sometimes seems that the scientist’s condition is existentially schizoid: oscillating from the high of being able to justify oneself to oneself and the whole world… to the low of not being able to justify any of what one does to the world, let alone to oneself.

But perhaps, this is not the plight of only the scientist, but all of us.

Three types of scientific claims

Scientific claims based on experimental data tend to be structured in one of three ways.  The three arguments are as follows:

1)  In our experiment on system X, we measured outcome Y.  Y is one and the same as the property of interest, P.

2) In our experiment on system X, we measured outcome Y.  Our model, M, can map Y to P.  By applying M to Y, we determine P.

3) In our experiment on system X, we measured outcome Y.  We are interested not so much in Y as in a particular property of X, namely, P.  We have a model, M, that maps P to Y.  We determine P by finding the particular value of P which, when fed through M, reproduces our measured value of Y.

Stated in a sort of math-y way, you can denote argument 3 as M(P) -> Y; argument 2 as M(Y) -> P; and argument 1 as Y = P.

Stated in words, I would say that argument 3 reconstructs P from measurements; argument 2 determines P from measurements; and argument 1 identifies or finds P from measurements.  Note how slightly different words reflect large differences in the structure of the claim.

An astute reader probably noticed that the way I ordered these three claims is not accidental: what I am proposing is a hierarchy.  I don’t think this hierarchy is too radical; it is really just a systematic way of asking “How dependent is my claim about a property of interest on a model?”  Most scientists would agree that experimental evidence is stronger if it pertains directly to the property of interest than if it relies on a model to extract that property.  The main point I want to make here, though, is that as science moves forward, we have been tending to move lower down the hierarchy – which is actually a little bit scary.  Or put another way: as we become interested in more exotic, minute, particular aspects of Nature, our ability to observe, detect, and measure them directly becomes weaker, requiring us to rely more heavily on models to make scientific claims.

The early chemistry breakthroughs of the 17th and 18th centuries were mostly formulated in terms of argument 1.  Take for instance the ideal gas law (pV = nRT), a staple of many of our intro chemistry classes.  Boyle was interested in the relationship between a gas’s pressure and volume.  He was able to measure the pressure and the volume directly (Y = pressure and volume) – and these turned out to be one and the same as the properties (P) he was interested in.
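The structure of argument 1 is simple enough to sketch in a few lines of Python.  The pressure–volume numbers below are hypothetical, chosen only so that pV comes out constant:

```python
# Argument 1: the measured outcomes Y just ARE the properties of interest P.
# Hypothetical pressure-volume data for a fixed amount of gas at constant
# temperature (arbitrary units).
pressures = [1.0, 2.0, 4.0, 8.0]   # measured directly; Y = P = pressure
volumes   = [8.0, 4.0, 2.0, 1.0]   # measured directly; Y = P = volume

# Boyle's law says p*V should be constant.  No model is needed to obtain
# p or V themselves; a model only enters when interpreting their relationship.
products = [p * v for p, v in zip(pressures, volumes)]
assert all(abs(pv - products[0]) < 1e-9 for pv in products)
```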

Chemistry, as it developed in the 19th and 20th centuries, became essentially the science of molecules.  Molecules, of course, are very small and, for all intents and purposes, cannot be observed directly.  However, despite their elusive nature, molecules (unlike people) are generally good at conforming to models, and so we have been able to learn a great deal about the properties of molecules by acquiring measurements that map onto those properties.  For instance, we often need to know the concentration of a certain molecule.  As simple as this type of information might seem, it cannot be obtained directly, since we can’t count molecules and take a tally.  Instead, we pass light through a sample containing the molecules, observe how much the light got attenuated (Y = attenuation of the light), and, using a fairly simple model, M = Beer’s law, relate the attenuation to the concentration (P = concentration).  For the most part, chemists are very comfortable with claims of this sort, as they must be.  Determining the structure of a molecule by NMR, or the rate constant of a reaction, are claims of this sort.  We are of course beholden now to a model, laden with its assumptions and range of validity.
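As a concrete sketch of argument 2, here is Beer’s law in a few lines of Python.  The molar absorptivity is the textbook value for NADH at 340 nm, but the intensities are hypothetical:

```python
import math

def concentration_from_attenuation(i0, i, epsilon, path_cm):
    """Argument 2 in miniature: map the measurement Y (attenuation of
    light) to the property P (concentration) through the model M,
    Beer's law: A = log10(I0/I) = epsilon * l * c."""
    absorbance = math.log10(i0 / i)          # Y, the measured attenuation
    return absorbance / (epsilon * path_cm)  # P, extracted via M

# Molar absorptivity 6220 L/(mol*cm) is the textbook value for NADH at
# 340 nm; the incident/transmitted intensities (100 -> 50) are made up.
c = concentration_from_attenuation(100.0, 50.0, 6220.0, 1.0)
# c comes out to roughly 4.8e-5 mol/L
```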

I believe that a shift has slowly been taking place under our feet, and increasingly, more of the scientific claims we make in the 21st century fall into the 3rd category of argument.  The best example of this shift I can think of is X-ray crystallography.  In this experiment, we record the pattern formed when X-rays diffract off a crystal composed of a molecule of interest, and use the data to determine the 3-D structure of the molecule.  Here, Y = the intensity of X-ray reflections, and P = the 3-D structure of the molecule.  In X-ray crystallography of the 20th century, M consisted of the fairly rigorous science of Fourier optics, which assures that the amplitude of a scattered wave is related to the Fourier transform of the scatterer’s electron density.  In this way, M maps Y to P.  It turns out, though, that Y does not contain enough information to uniquely determine P, a vexation referred to as the phase problem.  With smaller molecules, the phase problem can be overcome directly (that is, using the 2nd kind of argument) with computational tools.  For large molecules such as proteins, a different approach must be used that dips into the 3rd kind of argument: We have to guess what the structure is (P), use the model in the opposite direction (P -> Y) to back out what the outcome of the experiment would have been had the structure been the one we guessed, and then compare that to the actual outcome.  We then modify our guess until we’re happy with how well the predicted Y agrees with the measured one.  You say, “Well, when you put it like that, it sounds a bit hand-wavy,” but in fact, that’s par for the course!
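For readers who like to see the logic laid bare, here is a toy sketch of argument 3 in Python.  The forward model below is a made-up one-parameter function standing in for the vastly more complicated structure-to-intensities calculation; the point is only the shape of the reasoning – guess P, predict Y, compare, refine:

```python
def forward_model(p):
    """Toy stand-in for M(P) -> Y.  In real crystallography this would be
    computing diffraction intensities from a guessed 3-D structure."""
    return p ** 2 + 1.0

y_measured = 10.0   # the experimental outcome Y

# Argument 3: propose a P, run it forward through M, compare the predicted
# Y to the measured Y, and refine the guess until they agree.
p_guess, step = 0.0, 1.0
for _ in range(100):
    residual = forward_model(p_guess) - y_measured
    if abs(residual) < 1e-9:
        break
    # crude refinement: accept a step if it improves agreement,
    # otherwise reverse direction and halve the step size
    trial = p_guess + step
    if abs(forward_model(trial) - y_measured) < abs(residual):
        p_guess = trial
    else:
        step *= -0.5
# p_guess now reproduces y_measured through the forward model
```

Real refinement programs use far more sophisticated optimizers, but the epistemic structure is the same: P is never read off the data, only inferred through M.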

My point here is not to call out X-ray crystallography (we can save that for another post) – but only to use it as an example, and a particularly cogent one, since it is among the most vaunted and trusted techniques in biochemistry.  Rather, my point is that increasingly, most scientific claims look like this no matter what the experiment is: Many spectroscopies give us data that are too complex to map directly to properties, and instead their knowledge-content is brought to fruition by calculating the spectra with ab initio methods.  The consequence of this is clear: The science of my generation will look different from that of generations past in its deeper, more intimate reliance on models.  This change is ostensibly being driven by the much faster rate of improvement of computers relative to experimental apparatus, with the result that nowadays good models are much cheaper to make than good measurements (again, X-ray crystallography provides a wonderful illustration: the biggest hurdle Perutz and Kendrew faced in determining the 3-D structure of hemoglobin was figuring out what to do with the data!).  Whether this model-centric science will be a net good (e.g., more scientific output) or a net evil (e.g., more false positives; less reproducibility) remains to be seen.


Science as usual

How efficiently should a laboratory be run to maximize its efficiency at doing science?

While at first this might seem like a rather odd question – potentially an exercise in semantics – its literal meaning is the one I’m going for, and I’m asking it with all sincerity and frankness.

Scientists have a difficult relationship with efficiency.  On one hand, we love it: The post-doc that can load ten plates of samples per day; the computer script that runs the analysis in fractions of a second instead of doing it by hand in Excel; the tightly-run schedule for instrument use that minimizes down-time and optimizes every researcher’s time using the machine.  All of these are considered good for science.

But sometimes in science, efficiency is bad.

Like the post-doc who loads samples with the efficiency of a robot – a slower post-doc might have run fewer samples, but chosen those samples more carefully, and in so doing increased the likelihood of getting a hit.  Computer scripts are powerful tools – especially when equipped with fancy statistical methods – but somehow there’s still something special about the human eye: slowly and laboriously plotting data can reveal patterns evident to a human that a program could never pick out.  Finally, it is important for laboratory resources to be allocated fairly, like an expensive instrument shared among many researchers who need it.  But one researcher hogging the machine after the allotted time might notice a peculiarity in the data that leads to a discovery that would have gone unnoticed had she limited her time to the original booking.

All of these cases represent counter-examples to our intuition that businesses are more successful when they are run efficiently. But then again, science is not a business, as I learned recently for myself (read on).

The reason for writing this post is two-fold: first, a concern; second, a personal anecdote.

My concern is that science is increasingly being treated like a business – a trend that I call science as usual.  Successful scientists turn into managers; their labs turn into “research factories”; their air becomes formal and hierarchical; they lose their proverbial roots.  The trend is a result of many factors affecting the scientific community.  One is the need to publish regularly to meet the demands of funders, departments, and peers: efficient science is good at publishing regularly, whereas inefficient science is not.  Another is the need to be perceived as addressing a “real-world” problem, which certainly requires more focus and a narrower range of goals than science for science’s sake.  This push comes from funders for sure, but also from society as a whole.

And next, an anecdote.  I have great respect for my research advisor, Prof. Boxer, and as you might imagine from what I’ve written, if asked to name adjectives that come to mind when I hear his name, “efficient” would not be one of them (“slightly chaotic” would come up much earlier!).  Sometimes, the inefficiency gets on my nerves, like when I am scaling a small mountain of e-mail to schedule a meeting.  But for the most part, the inefficiency is remarkably efficient at leading to good science!  If this sounds like a paradox to you, then you have comprehended my meaning!

In one instance, about 4 months ago, I had the challenging task of preparing a manuscript for a paper with three professors with very different styles and locations (my advisor being one of them).  I was feeling some pressure to get the damn thing submitted already, but Boxer was calling for yet another round of revisions and editing.  I decided that I could make a compromise that would please both parties by e-mailing out the latest version, soliciting for more revisions, but asking to please send me final edits by such-and-such a date, when I would submit.

I think it would have been a pretty reasonable plan if I were submitting a report to my boss at a company.

But my advisor wrote back a quick e-mail with the following words:

Stephen,  we do not put deadlines on scientific papers or other people’s efforts.  This isn’t a business.  s

When I read the e-mail, I smiled; I could practically hear his voice in my head.  And then I thought: He’s right.  Lesson learned.

Chemistry in the Post-sanitized Era

Whenever I have asked chemists above the age of 50 what originally excited or encouraged them to follow the path they took, their answers invariably include anecdotes about a Gilbert chemistry set, or their favorite local drugstore where, if they smiled gingerly enough, they could twist the shopkeeper’s arm into letting them buy a few chemicals to make their own explosions or stink-bombs.  The stories have a way of evoking a simpler time, when kids could be rambunctious rascals and budding scientists all at once, without sending their parents (not to mention the TSA) into conniption fits.

Like love at first sight, after the Gilbert chemistry set, the rest (first job, first major discovery, first major prize) was history.  It’s like the chemist’s equivalent of the sappy RomCom, and similarly it makes for a great story.  Circumstance has it that our protagonist stumbles by chance into his soulmate (chemistry), and after a few brief predictable wrong turns (like a flirt with physics), he gets the girl and lives a long and prolific (in chemistry papers, that is) life. But unlike sappy RomComs, you won’t see the “Gilbert chemistry set story” at the movie theater.  Not just because the movie would make almost zero revenue, but also because no one remembers the story any more.

Whenever I have asked my peers what path brought them to the chemistry department’s doors, I find that absolutely none of us has that story.  None of us share the collective heritage that defined the previous chemical generation.  Our stories tend to read more like this.

We were sons (and daughters) of hard-working education-oriented families – frequently, first or second generation immigrants.  We went to school under the proviso to study hard in all subjects, stay focused, and not get into trouble.  We excelled in basically all sciences through high school (after all, a good college application can’t have any gaps nowadays), but perhaps we had a relatively good chemistry teacher.  In college, it was never too early to start thinking about what’s next, and of course in today’s economy, getting jobs is no simple feat, so we played it safe and checked out computer science.  But wait a sec, we thought that stuff was a bit dry and don’t we want to get our hands just a little wet?  Chemistry seemed okay, and in the worst case scenario, it’s a good pre-med option because it helps you stand out a bit more amidst the hosts of biology majors.  The decision to pursue chemistry as a career fell out of a happy undergraduate research experience with the professor providing just the right stoichiometry of mentorship and independence, direction and exploration.

Our stories are so full of qualifiers.  To put it in Hollywood terms, we are the annoying supporting characters in the RomCom who judge the central protagonist’s impulsive decisions and try to talk “sense” into him – don’t fall for that crazy girl.  Many of us have never made a stink-bomb or an explosion.  For us, the only part of chemistry that was spontaneous was learning what Delta G < 0 means.  We can tell you all about ab initio methods in electronic structure theory, but haven’t the faintest idea what you should mix together to make a homemade firework.  The long and short of it is that our sanitary society has no more room for the Gilbert chemistry set.

Perhaps my observations amount to nothing more than naïve nostalgia or filiopietism – after all, what could qualify more as a “first world problem” than a reasonably privileged Stanford grad student pooh-poohing his upbringing bereft of cool chemistry sets?  Oh the travesty that we had to play internet games instead!  But I would beg to differ.

For one, I think it says something fundamental about society in general and science education in particular.  Has science become too hard to be fun?  Also, the contrast between the chemistry set and the video game is not entirely frivolous: it is another reflection of our increasingly virtual way of life, which prefers finding the answer on a computer to getting a little dirty and discovering it for ourselves.  Finally, I wonder whether, because of our differing formative experiences, my generation of chemists will be less likely to try crazy random experiments purely because they sound exciting, favoring instead safe-and-steady research chosen to optimize the likelihood of getting funded.  I think we already see that pattern developing.

So, goodbye Gilbert chemistry set.  You are missed.

A fine day for molecular dynamics

Generally speaking, I am not a “morning person.”  Like most graduate students, I tend to stay up till 1–2 AM and wake up between 8–9 AM.  But this morning, at 6 AM, my phone buzzed with a text.  A friend and co-worker of mine in India (where I suppose it was a ‘sane’ time) messaged me:

BTW, Warshel got the NP!

Normally, my reaction to my phone buzzing early in the morning is to hazily read the message, go back to sleep, and read it again when I’m in my right mind.  But this time, I was abuzz, and I was pretty sure going back to sleep was not going to happen.  Is it slightly embarrassing that I was so excited that Arieh Warshel, computational enzymologist, had just won the Nobel Prize?

So here’s the back story.

My research in graduate school has focused on trying to understand the physical origins of enzymes’ catalytic power.  Enzymes are Nature’s miracle catalysts, allowing chemical reactions that would otherwise take longer than the age of the known Universe to occur in split seconds.  No man-made catalyst does nearly as well, and despite the incredible breakthroughs in biochemistry, the active sites of enzymes remain as secret and mysterious as dark caves (which is actually what they normally look like in X-ray structures).

When I first started graduate school, I decided that I needed to switch my focus from synthetic catalysts to enzymes, and one of the first papers that I read on the subject that got me excited was:

“Electrostatic Basis for Enzyme Catalysis.” Arieh Warshel, et al. Chem. Rev. 2006, 106, 3210–3235.

What the paper claimed is that the electrical forces that enzyme active sites exert on their bound substrates are responsible for driving the substrates to react quickly and efficiently.  The authors used powerful computers to model the active sites of many enzymes and found that this is a general aspect of enzyme catalysis.  In graduate school, my research has focused on testing these claims experimentally, since so far they have been supported only by computer models.  Nevertheless, Warshel’s work played a significant role in shaping my research interests and my graduate school project – so you can see why this day has been an exciting one!

More broadly, the three scientists who will share the prize are known as founders of a whole constellation of computational approaches that I and many other chemists use in our research, known as molecular dynamics – or MD for short.  So what’s all this MD-business about anyway?

Molecules come in many shapes and sizes – the simplest one is H-H (a molecule made of two hydrogen atoms with a chemical bond between them), and enzymes are among the more complicated ones (consisting of tens of thousands of atoms, mostly carbon, hydrogen, oxygen, and nitrogen).  In general, you need quantum mechanics to explain just about everything about molecules.  In fact, using the classical physics we learn in high school, you would predict that atoms could never come together to form molecules (and physicists doubted the existence of molecules until the early 20th century).  Quantum mechanics is powerful and highly experimentally validated: it makes many accurate predictions about molecules, including the way they move and interact with each other.  But quantum mechanics comes with a price: it’s very complicated – which means it takes a really long time to calculate the full quantum mechanical answer to a chemical problem.  Even the world’s most powerful computers can scarcely calculate quantum mechanical solutions for molecules with much more than 100 atoms.  That’s a far cry from an enzyme, with thousands of atoms.

So what can we do?  The answer is that we cheat.

If you were to perform a bunch of quantum mechanical calculations on H-H, you would make two important discoveries: first, that H-H is most stable when the atoms are ca. 0.74 angstroms apart (an angstrom is 10^-10 meters) – this is called the preferred bond length.  Second, as you push the H-atoms apart or squeeze them together, the energy goes up as the square of the distance you perturbed them away from that preferred length.  In other words, H-H behaves just like a spring, at least for small displacements.  In classical mechanics, we learned about springs and how they can be described by Newton’s equations of motion along with Hooke’s law.  So what this means is that we can table the whole quantum mechanics business and just treat H-H as if it were a spring, a much easier thing to think about.  Moreover, a computer would take a fraction of a second to calculate a spring energy, whereas it might take days to do the full quantum mechanics problem.  We now have a convenient short-cut!
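A minimal sketch of this “spring” picture in Python.  The bond length is the textbook value for H-H, but the spring constant here is a round hypothetical number – a real force field would fit it to quantum calculations or experiment:

```python
# Harmonic ("spring") approximation to the H-H potential energy surface.
R_EQ = 0.74   # equilibrium H-H bond length, angstroms (textbook value)
K    = 35.0   # HYPOTHETICAL spring constant, energy units per angstrom^2

def spring_energy(r):
    """Hooke's-law stand-in for the quantum energy:
    E(r) = 0.5 * k * (r - r_eq)^2."""
    return 0.5 * K * (r - R_EQ) ** 2

# The energy is zero at the preferred bond length and rises quadratically,
# symmetrically, for stretches and compressions of equal size.
assert spring_energy(R_EQ) == 0.0
assert abs(spring_energy(R_EQ + 0.1) - spring_energy(R_EQ - 0.1)) < 1e-12
```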

Bigger molecules like enzymes have lots of atoms with many bonds, but we can take this “spring idea” and just keep building on it, turning all the bonds into springs.  This way of turning a quantum mechanical problem into a classical problem is known as molecular mechanics, and the specific set of parameters needed to get this quantum-to-classical mapping right (without losing too much in translation) is called a force field.  Don’t be confused! – this isn’t the same thing as a force field in Star Wars… although they are easily just as exciting, and it would certainly be fair to say that the three Nobelists announced today – Michael Levitt, Arieh Warshel, and Martin Karplus – are chemistry’s equivalents of Jedi masters ^_^.  Jedis and springs aside, this is an exciting time for molecular dynamics.  We are at the point of figuring out how Nature’s complicated molecular machines work at the physical level, and this is helping us design new molecules for useful purposes (like medicines and fuels) using our brains instead of just guesswork.
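To make the idea concrete, here is a toy molecular-dynamics loop in Python: one bond treated as a Hooke’s-law spring, integrated with the velocity Verlet algorithm (the standard workhorse integrator in MD).  All constants are arbitrary illustrative values, not a real force field:

```python
# Toy MD: integrate Newton's equations for one bond modeled as a spring.
k, r_eq, m, dt = 1.0, 1.0, 1.0, 0.01   # arbitrary illustrative units

def force(r):
    return -k * (r - r_eq)             # Hooke's law: F = -k (r - r_eq)

r, v = 1.2, 0.0                        # start with a stretched bond at rest
f = force(r)
for _ in range(1000):                  # velocity Verlet time-stepping
    r += v * dt + 0.5 * (f / m) * dt * dt
    f_new = force(r)
    v += 0.5 * (f + f_new) / m * dt
    f = f_new
# The bond oscillates about r_eq, and the total (kinetic + potential)
# energy stays nearly constant -- the hallmark of a good MD integrator.
```

A real force field adds angle-bending and torsional springs plus non-bonded (electrostatic and van der Waals) terms, and steps thousands of atoms at once, but the core loop looks just like this.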

Death of a Scientist

I am rather impatient right now.

Here’s the issue.  Back in April, I started writing a manuscript (what you call a paper before it is published) about a fancy-schmancy computational method that I and some co-workers developed.  We were pretty excited about the results; we found that our model was able to reproduce a large range of experimental observations, and we even got to make some new predictions to boot.  Moreover, we found that the traditional (less fancy-schmancy) models that most scientists would use do not always get the right answers – and we found what those models get wrong!  Things like this get scientists excited, because it’s nice when your hard work can persuade other scientists to do things your way.  In any event, I finished the first draft in June, sent it to my collaborators, got a bunch of feedback, version 2 turned into version 3, which somehow managed to become version 8.

At this point I’m ready to submit, and pretty much everyone is on board.  Except for one very important (and stubborn) person.

My advisor.

We’ve been sitting on this thing now for the last 2 months, and the main reason is there’s this pesky super-technical thing that we’re trying to sort out.  Yes, it is important; and Yes, I appreciate that we want this paper to be the best darn thing that’s happened to computational electrostatic modeling since sliced bread (in this context, sliced bread probably would be the particle mesh Ewald summation method), but does my paper really have to solve every problem in the known universe?? (or the universe of computational electrostatic modeling).  Anyhow you get the gist.

In response to one of my “reminders” that we need to submit the paper soon, Professor Boxer told me this:

You know, Stephen; long after we’re dead and gone, people will hopefully still be reading your papers.  They’ll be trying to learn about what you did and trying to use it.  And if something’s wrong or missing, they won’t be able to send you an e-mail to get it figured out.  What you write down matters.  So act that way.

He went on to tell a sobering (and not very flattering) anecdote about another professor, with whom he once collaborated on a paper.  The other professor in this case was the corresponding author – meaning the person with whom the buck stops at the end of the day.  After the paper was accepted, my professor noticed an error in one of the figures.  He promptly alerted the collaborator and said, “Hey, we need to fix this ASAP before this goes to press,” but the collaborator said, “Meh.  The reviewers didn’t notice it.”  The point couldn’t have been made any clearer: “Not everyone is careful, and I’m not training you to be one of those types of scientists.”

The anecdote was germane in more ways than one, though.  As it turned out, the day Professor Boxer gave me his advice was just hours before the most serious holiday of the Jewish calendar – Yom Kippur.  One of the themes of the day is to reflect on one’s own personal shortcomings over the previous year and strive to fix them for the upcoming year.  While the main thrust of Yom Kippur relates to ethics and human interactions, my advisor’s comment stimulated me to think more about the scientific forms of these themes: What do I, as a scientist, owe to other scientists, both contemporary and of generations yet to come?  What can I do this year to be a better scientist than I was last year?  What do I want my scientific legacy to be?  Like other hard-hitting questions one confronts in services on Yom Kippur, many of these do not have straightforward answers.  But I think there is still value in taking time out to reflect on them, as they are questions we do not frequently confront day-to-day in the lab.

A Nobel Pursuit

In the south of Germany bordering Lake Constance there is a small sleepy medieval town called Lindau.  As far as I can tell, not so much happens in Lindau, except for the fact that for one week every year, 30-or-so Nobel Laureates convene there to meet and interact with the next generation of promising young researchers from across the whole world – a fairly big affair, if I do say so myself!  Most years, the meeting is the largest single gathering of Nobel Laureates, and a few weeks ago, I was lucky enough to be there too.

As a conference, it is probably fair to say the Lindau meeting is one-of-a-kind.  As a graduate student, you don’t get too used to fancy meals, media and press running around everywhere, or picturesque European villages – so all of these were pretty new for me.  At most conference seminars, you expect to hear about something very recent, cutting-edge, and perhaps slightly esoteric; at Lindau, you’re more likely to hear about something that you read in a textbook.  For instance, during Richard Ernst’s lecture, he discussed his insight, inspired by music, that signals from nuclear magnetic resonance (NMR) could be detected more quickly, more accurately, and with more sensitivity by shooting all frequencies at a sample at once, as opposed to one at a time (the Fourier transform is the secret trick that makes this fairly unbelievable statement true).  To a modern chemist, the thought of doing NMR any other way is almost inconceivable; the thought of not having it at all – terrifying.  The world of chemistry without Fourier transform NMR would be like trying to solve a crime without fingerprints or forensics: you wouldn’t have any unique identifier for whom you’re trying to find, so you’d be left with indirect clues.  The striking thing about it all is that these developments are really not that old (Ernst won the Nobel Prize in 1991); within the span of my own life, a scientific era of sorts started, and may even end in the wake of an equally great discovery.
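The trick Ernst exploited can be demonstrated in a few lines of Python: excite two “resonances” simultaneously, record only their summed signal, and let a (here naively implemented) discrete Fourier transform pull the individual frequencies back apart.  The frequencies and amplitudes are made up for illustration:

```python
import cmath

def dft(signal):
    """Plain O(n^2) discrete Fourier transform (no external libraries)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n)) for f in range(n)]

n = 64
freqs = [5, 12]                        # two hypothetical resonances
# Everything "rings" at the same time; the detector sees only the sum.
signal = [sum(cmath.exp(2j * cmath.pi * f * t / n) for f in freqs)
          for t in range(n)]

# The Fourier transform separates the overlapping oscillations into
# distinct peaks -- one per resonance.
spectrum = [abs(x) for x in dft(signal)]
peaks = sorted(range(n), key=lambda f: spectrum[f], reverse=True)[:2]
```

Scanning each frequency one at a time would have taken n separate measurements; multiplexing plus the transform recovers all of them from a single record, which is the heart of the sensitivity gain.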

Probably the most interesting and unexpected thing I got out of the Lindau meeting was an opportunity to reflect on where we’ve come from, and where perhaps we’re going as chemists.  The pace of scientific change is truly staggering, and this tends to be least appreciated by scientists themselves.  Why?  Because if a new theory or tool fails to make reliable predictions or be useful, it is quickly forgotten; but if it succeeds, it just as quickly becomes accepted as inevitable and recedes into a scientific background upon which all future claims have to be based.  Effectively, it becomes a “new normal,” and what was in fact a historically recent contribution becomes perceived as a timeless truth of nature, especially to younger generations being taught from textbooks.  What Lindau did for me is deconstruct this faceless (and fictitious) account.

We had a number of opportunities to chat informally with these eminent scientists and also got to hear about their personal scientific journeys.  So what makes a Nobel prize winner?  Obviously, they spanned a rather wide spectrum in terms of interests, personality, and disposition (I’ll table the gossip for now… :p ) – but this much seemed to be fairly general across the board: they were not megalomaniacs seeking fame, prestige, or admiration.  They were not workaholics, fiercely competitive with their peers.  They were persistent and curious.  They were deeply committed to solving a particular, important problem to which they were strongly personally attached; they stayed focused on that problem over long periods of time (sometimes in spite of doubt and detractors); and then they were met with a blessedly large dose of luck.  They did not throw themselves into the fray of “popular” or “fast-moving” fields, but rather doggedly pursued their own research… which would later create a new field and then become popular.

I hope I can carry these lessons with me and embody them in my own future as a scientist.  Nobel prize or not, I think it will all the same be a noble pursuit.

What scientists want

To most outside observers, it would appear that scientists spend most of their time worrying about science.

In fact, we spend most of our time worrying about papers.

Papers (or more formally, peer-reviewed journal contributions) are a source of endless consternation.  There’s of course the preëminent question of How many papers do I have?  This is the scientist’s equivalent of wondering if you have enough friends or make enough money.  You also frequently hear scientists concern themselves with where their papers are (meaning, in which journals).  After all, quality is more important than quantity, right?  Surely a finding of great importance and broad interest (such as the complex dynamics of the Brangelina pathway) deserves to grace the pages of People as opposed to the lower impact factor Us Weekly.  On the other hand, in science just like in celebrity gossip, it’s not so easy to decide what’s more important than something else, and the matter often comes down to personal taste rather than objective criteria.  And nothing in the 21st century is complete without Big Data creating labels and metrics that purport to organize complex systems that are not actually amenable to statistical description.  Scientists get to deal with a whole host of them: the citation count (like television ratings), the Hirsch index (an awesome-ness meter), among others.
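
For the curious, the Hirsch index at least has a crisp definition: the largest number h such that you have h papers with at least h citations each.  Here is a minimal sketch of how one might compute it (an illustration of the definition, not anyone’s official implementation):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)      # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with at least 4 citations each
```

Notice how coarse a summary this is – two very different careers can share the same h-index, which is rather the point of the complaint above.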

Perhaps the point was made most directly by the eminent chemist George Whitesides, who once wrote:

If your research does not generate papers, it might just as well not have been done.  “Interesting and unpublished” is equivalent to “non-existent.” (Adv. Mater. 2004. DOI: 10.1002/adma.200400767)

So what’s the big fuss?  With all their intelligence, energy, and devotion – why do scientists worry so much about something that seems so petty in the face of the grandeur of Nature?  Why have we become slaves to MS Word, PowerPoint, Adobe Illustrator, and LaTeX when we could be cloning a new gene or building a new laser?

I will offer one reason, but I am sure that many readers will be able to offer more.

The importance of the scientific paper is really a statement about the importance of communication in science.  Research remains in many ways an inherently cloistered pursuit, and great effort is needed to counteract this tendency – to share results with the community and to build bridges between scientific islands.

The scientific paper has a beautiful history.  The earliest scientific papers were missives sent between late 17th-century philosophers who wanted to explain their findings to their friends – oftentimes overseas (there was a degree of literalness to the aforementioned “islands”!).  These thinkers were too eager to share exciting new information to take the time to write and publish a whole book, and so the scientific paper as we know it was born.  The ethic the paper embodied was an impatience to share, and a willingness to do so without explicit recognition.  In my opinion, the lionization of the paper is (at its best) an expression of a socialistic framework that puts the agenda of the whole scientific community ahead of the scientist-self.

The paper has traveled a long way from its humble beginnings, and in the 21st century, it has taken on some very new roles.  The scientific paper now also satisfies a human need to differentiate oneself from others and create hierarchy, constituting a prize over which scientists are pitted against one another.  Institutions play into this more and more, searching for ways to quantify and compare the relative importance of scientific works.  The scientific paper also feeds the need for recognition, admiration, and a sense of self-worth, in a sense becoming a de facto currency in a world that doesn’t place much stock in actual money.  While some of these aspects are reasonable to various degrees, I cannot help but think sometimes that in our brave new world of citation counts and Hirsch indexes, we have forsaken the original purpose of the scientific paper – we have forgotten its history and original meaning.

I do believe that scientific papers are an absolutely essential component of the scientific pursuit – and are arguably even worth all the fretting.  But scientists must also remember what these documents really represent – the fact that we are one of the few communities whose currency is measured not in what we own, but in what we give back.

Better late than never

I had a professor in thermodynamics who was so excited to teach the first lecture that he forgot to introduce himself to the class.  Fortunately, one of his TAs reminded him, and at the second lecture, he bashfully handed out the syllabus, announced his name and office hour times, etc.  “Better late than never,” he admitted to us.

Likewise, I thought that the second post would be a good time to introduce myself to the readers (after all: if I weren’t excited to teach you something, would you have much of a reason to get to know me?).  People are generally surprised to find out that I was born in Kansas City, Kansas.  Probably because KC doesn’t seem like a very science-y place, or a very Jewish place, to most people (I tend to prominently exude both).  In any event, for me this is where I first started to like science a lot.  I was lucky to have parents who wanted me to follow my interests, and to attend one of the top high schools in KC (Pembroke Hill), which provided an enriching environment in addition to my home.

Here’s a little timeline-biography that I made to help fast-forward you through the last 13 years.

Isn’t life simple when everything leads to one final goal?

Like most biographies of scientists that you’ll find (on, say, Wikipedia or university department websites), this timeline is pretty barebones.  It tells you where I went to college (MIT), what I studied there (chemistry and physics), where I went to grad school (Stanford), who advised my graduate research (Steve Boxer and Vijay Pande), and what I did with them (molecular biophysics).  To a fellow scientist, these facts are the equivalent of a social security number – actually, since so many scientists are international, they’re more important than my social security number.  These few tidbits about me are like a stamp or barcode from which other scientists will summarily discern: 1) whether I should be taken seriously; 2) whether it is worth his/her time to hear about my work; and 3) whose “side” of various academic debates/feuds I’m on.

What the timeline doesn’t tell you is what my philosophy on science is, why I chose to get into research, or what I find rewarding about it.  It doesn’t tell you how many times I switched my major, or that it was purely a matter of chance that I joined Boxer’s lab.  And in fact, you almost never see information like that in a biography of a scientist.  The canonical scientific timeline creates a falsely linear representation of a life.  Or rather, if life is a high-dimensional data-set, then this is a one-dimensional projection of it.  When you read my timeline, you’re persuaded to think that somehow the stars were aligned in 1987 for me to become a molecular biophysicist, and that everything in my past was orchestrated to lead to this present.  Scientific biographies seem to prop up this narrative about who scientists are and how they got themselves there.

But none of that is true.  And I would argue it isn’t true for the vast majority of (good) scientists.  It is convenient to appropriate the past to justify the present, but that can only ever be done in the present.  In fact, in the past at various times, I thought I was going to become a movie director, a judge, a cell biologist, a science fiction writer, a synthetic chemist, a materials scientist, and a nanotechnologist – roughly in that order.  It has been like a Markov chain in “discipline” space!  And while it is true that things would have been nicer or more efficient if I had just known all along I was going to be a biophysicist, that was not reality, and had it been so, I must believe that I would not be myself, but probably some much duller person who wouldn’t be writing this blog.

So, yes, I am a biophysicist.  But I am also a piece of intellectual clay.  I am molded by circumstance, by inspiring words, by the world around me – and I am not ashamed to admit it.  I can be a contradiction: an amalgamation of hypotheses and beliefs that are not always mutually comprehensible.  Is this how a scientist ought to be?  I am not sure, but I am going to say yes.

After all – if I am not for me, then who will be?

The Lab Tourist

Hello world –

Hope this message finds you well.  My name is Stephen Fried, and I am training to become a scientist. With this blog I invite you to join me as I venture into a strange but wonderful world – the scientific world.  As I cross certain milestones in this journey, I hope to share with you the triumphs and travails that come with this odd line of work that I have chosen for myself.  Along the way, I also plan to tell a few funny stories and philosophize a bit too.

For most of us, science is a closed book (I for one didn’t open up my science textbooks in high school).  Medicine is something that we all experience directly (and often viscerally) when we go in to get a check-up or hear about a friend in the hospital getting surgery.  Technology we use and hear about every day; we know first-hand that our lives would be very different without it.  Science is often thought of as standing alongside the likes of medicine and technology – but for most, our primary relation to it is faint memories of a 10th grade class that was poorly taught, rather hard, and not particularly fun.  Even for those of us who are enthusiastic about science and defend it, inspiration comes more often from Star Trek or sound-bites from pundits than from direct (or even indirect) exposure to data, models, or theories.

Why is this?  The most important reason is that science, unlike medicine and technology, is not tangible.  You can’t touch Newton’s laws like you can touch an iPhone or a leg prosthetic.  You can’t sell science on the stock market, or use it to go to Hawaii.

On a number of occasions, I’ve been asked to give a lab tour, normally to a younger sibling of a friend.  The “tourist” is normally a bright young kid, full of enthusiasm and a tad nerdy (as I was!).  At the end of an hour of pointing out and explaining a number of instruments and materials I use, the lab tourist normally says, slightly disappointed, “Well – I liked it, but your lab looks like all the other ones.”  And it’s true; most labs (at least in biology and chemistry) do look alike (I sometimes wonder if it was intentionally planned that way).  “That’s because,” I explain, “what makes each lab different from other labs is not its instruments, but its ideas.”

Science does not live in a place.  It lives in that sudden spark when it all just makes sense.  It lives in the conversations between scientists – sometimes cordially discussed, sometimes heatedly argued.  It also lives in conflict, ambiguity, and contradiction.  At this point, the lab tourist is exasperated with me, and it’s time for him (and it normally is a him) to get on with the day.  As for myself, I am left a bit unnerved that I was unable to successfully express my enthusiasm for what I do, especially to someone who should have been pretty easy to convince!

The second reason why science is closed off to many of us is a much simpler and rectifiable one: we simply don’t know any scientists from whom we could hear a primary account.  Many Americans have a doctor or engineer for a relative, friend, or neighbor; but how many of us keep a scientist in our company?  And even for those who do have scientist friends, how often are we willing to probe them, and when we do, how often is the response we elicit a deferral: “Well, I’m not an expert in such-and-such.”

There are a number of large hurdles for those people who want to engage with science, but cannot spend their entire lives studying it.  This blog is about my small attempt to break down those barriers.

I want to share with you what goes on in science.  I want to give you a glimpse into this (my?) world.  And most importantly, I want to talk to you frankly about science as a process, not as a finished pristine product, which is the only form of science we encounter in NY Times articles and sound-bites.  Basically, I want to give you a lab tour, albeit a somewhat unconventional one.

So join me and read along.

Chances are it will be more interesting than 10th grade.