Primer: Electrophysiology

These posts, tagged “Primer,” are posted for two reasons: 1) to help me get better at teaching non-scientists about science-related topics; and 2) to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

It’s been a while since I posted one of these, but as I’m working on radically different science than I have in years past, and people ask me “what I do,” I figured I should take the time to explain, to some degree.

Wikipedia defines “electrophysiology” in the following way:

Electrophysiology (from Greek ἤλεκτρον, ēlektron, “amber” [see the etymology of “electron”]; φύσις, physis, “nature, origin”; and -λογία, -logia) is the study of the electrical properties of biological cells and tissues. It involves measurements of voltage change or electric current on a wide variety of scales from single ion channel proteins to whole organs like the heart. In neuroscience, it includes measurements of the electrical activity of neurons, and particularly action potential activity.

So, in the most general sense, I’m “listening to neurons talk to each other,” and occasionally, “interrupting their ‘conversations’” in various ways.  When I talk about “conversations,” I’m referring to the act of neurotransmission, whereby one neuron sends a chemical signal across a synapse to another neuron, resulting in the propagation of that signal (an action potential), or sometimes the inhibition of another signal.

As I talked about in a previous primer, in order for an action potential to occur, various ion channels in the membrane of a neuron must open, allowing sodium (Na+) from outside the cell to come in, and potassium (K+) to go out.  Other ions will play roles as well, including chloride (Cl-) and calcium (Ca2+).

Using electrophysiology, it is possible to measure the movement of these ions across a cell membrane using relatively simple principles of physics.  Specifically, Ohm’s law: V = IR, or voltage = current × resistance.  If you know two of the terms in this equation, you can calculate the third.  Effectively, we do this using a “patch pipette,” a small, fine-tipped glass tube that has a wire electrode running through it.  If you know the resistance of the pipette, and you hold the electrode at a constant voltage, you can measure the current across the membrane of a cell (i.e. the flow of ions).
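
To make that arithmetic concrete, here is a tiny sketch of the V = IR bookkeeping in Python; the holding voltage and pipette resistance below are just plausible round numbers, not values from any particular experiment:

```python
# Ohm's law, V = IR, rearranged for voltage clamp: hold the voltage,
# know the resistance, and solve for the current you should measure.

def current_pA(voltage_mV, resistance_MOhm):
    """I = V / R, with units chosen so the answer comes out in picoamps."""
    # millivolts / megaohms = nanoamps, and 1 nA = 1000 pA
    return voltage_mV / resistance_MOhm * 1000

# e.g. holding at -70 mV across a 5 MOhm resistance:
print(current_pA(-70, 5))  # -14000.0 (pA, i.e. -14 nA)
```

The same rearrangement works for any pair of known terms, which is exactly why holding the voltage constant lets the amplifier report ion flow as a current.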

In short, this diagram describes the actual process of making this measurement, using a technique called “patch clamp”:

Looking through a microscope (like the one pictured above), you move one of these glass electrode pipettes until it is just touching the membrane of a cell.  You have to be very careful not to puncture the cell, which would damage the membrane to the point where you can’t make accurate measurements.  You then apply a small amount of suction using a syringe to pull some of the cell membrane inside the pipette.  Once a strong seal has formed (typically termed a “gigaseal”), you can apply a brief, larger pulse of suction to rupture the patch of membrane inside the pipette, so that the inside of the cell now freely exchanges with whatever you put on the inside of the pipette.  That internal solution is usually a potassium-based salt solution, basically trying to recreate what the inside of a cell would be, aside from all the organelles; however, you can add compounds or drugs to manipulate the actions of the channels you are trying to study.  Typically, though, you apply drugs to the outside of the cell as well.

So, a real-world example of how this technique is used would be my study of NMDA channels.  The NMDA receptor is an ion channel that lets sodium and calcium into the cell, and it is very important in neurotransmission, especially in memory.  When I have a cell “patched” like in the diagram above, I can apply the drug NMDA to the cell and see a large inward sodium current on my computer screen, kinda like this one.

So, over time, when a drug like NMDA or this “Blocker” is applied, you can see a change in the current (measured in “picoamps”) across the membrane of the cell.  In this case, we would read these data such that NMDA opens its channel and sodium ions flood inward, then that current is reduced by the “Blocker” that was applied for a few seconds, and then once the application of the “Blocker” was stopped and NMDA alone was applied to the cell, the inward sodium current increased again.
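
As a sketch of how a trace like that reads, here is a toy Python model of the experiment just described.  The time course and current amplitudes are invented for illustration (by convention, inward current is plotted as negative):

```python
# Toy whole-cell recording: baseline, then NMDA application, then a
# "Blocker" co-applied for a few seconds. All numbers are illustrative.

def membrane_current_pA(t_seconds):
    """Membrane current at time t in this pretend experiment."""
    if t_seconds < 5:
        return 0.0       # baseline: nothing applied yet
    if 15 <= t_seconds < 20:
        return -50.0     # NMDA + Blocker: inward current reduced
    return -200.0        # NMDA alone: large inward (negative) current

trace = [membrane_current_pA(t / 10) for t in range(300)]  # 30 s at 10 Hz
print(membrane_current_pA(10), membrane_current_pA(17), membrane_current_pA(25))
# -200.0 -50.0 -200.0
```

Plotting `trace` against time would reproduce the shape described above: a flat baseline, a downward step when NMDA opens its channels, a partial rebound toward zero while the Blocker is on, then a return to the full inward current.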

These traces allow you to get information about how channels are opening, what ions are flowing in what direction, and to what degree drugs like this “Blocker” are affecting channels.  It is work like this, for example, that led to characterization of benzodiazepines and barbiturates, drugs that interact with the GABA receptor, a chloride channel.  Without these techniques, it is difficult to know how a drug is affecting a channel at the cellular level.  Just about every cell in your body has channels of some kind, as they are very important for maintaining the function of that cell.  Neurons simply rely on ion flow more heavily than most other cell types, though heart cells are also studied in this way, among others.

Effectively, these techniques allow you to determine how a cell works.

Primer: Psychopharmacology, Part I

It’s crazy to think that I’ve been posting these things monthly since last June.  For my first Primer, I talked about Pharmacology, as I had just completed a Ph.D. in it.  Now, a year later, I’ll elaborate further on the subject that got me interested in it in the first place: psychopharmacology.

As I wrote back then, I took a class at Truman State based out of the Psychology department that taught students about psychopharmacology, defined as:

Psychopharmacology — noun

the branch of pharmacology dealing with the psychological effects of drugs.

In broad strokes, we’re talking about how a drug can change your state of perception, whether it causes or alleviates hallucinations, alters your mood, dampens your emotions, and so on.  Something that changes your “normal psychological state” to something else, whether that be therapeutic or “recreational.”

In order to grasp what happens in your brain when your mood is changing, you need to have a basic idea of the structure of the brain and neurotransmission, both subjects I have discussed in the past.  For example, much of your cognition happens in the brain region called the Cerebral Cortex, and it is dependent upon neurotransmitters like acetylcholine and dopamine.  Alternatively, emotions like anger, aggression and fear tend to be centered in another region called the Amygdala.  Bear in mind that the varying areas of the brain “talk” to each other, and if you affect the signaling in one area, you may very well affect another area.  This may well be the point of any pharmacological intervention, but frequently, you get undesired consequences we call “side effects.”

Let’s look at the Cortex first.  Schizophrenia, a disease characterized by delusions, hallucinations and disorganized speech and thought, is thought to be caused by misfiring neurons in the Cortex that release dopamine.  Therefore, if your cortical neurons are releasing too much dopamine, for any reason, you can end up with hallucinations and delusions, etc.  Interestingly, you can induce schizophrenia-like symptoms in an individual by giving them amphetamine or cocaine, both of which also increase dopamine signaling (amphetamine by promoting dopamine release, cocaine by blocking its reuptake), though on a wider scale throughout the body.  For those with Schizophrenia, you typically prescribe an antipsychotic, a drug that inhibits dopamine release or reception.

The trick with drugs like antipsychotics, however, is that you want to inhibit dopamine release in the cortex, yet you want to limit that drug’s effect on other areas of the body where you still need dopamine release, or other neurotransmitters like norepinephrine that are responsible for completely different things (hence, side effects).  For example, if you were to design a drug to limit release of dopamine, you could fix their symptoms of Schizophrenia, but you could also affect mobility, as dopamine is responsible for voluntary control of movement.

This is how we arrived at “typical” and “atypical” antipsychotics.  The “typical” drugs were the first-generation antipsychotics that did a reasonable job of limiting schizophrenic symptoms, but also affected other dopaminergic neurons in your body (i.e. your movement).  People on these drugs for decades frequently developed a movement disorder called Tardive Dyskinesia.  The second-generation “atypical” antipsychotics were more specific to the Cortex, and limited schizophrenic symptoms while mostly leaving other dopaminergic signaling pathways alone, thus largely avoiding these dyskinesias.

As another example, Depression is a mood disorder that makes you feel sadness, anxiety, and general hopelessness.  This disease is thought to involve the limbic regions of your brain, which include the amygdala and the prefrontal cortex.  Depression, however, is the opposite of Schizophrenia in that it is associated with a deficit of the neurotransmitters serotonin and norepinephrine.  The drugs of choice used to be TCAs (tricyclic antidepressants), which blocked the reuptake of serotonin and norepinephrine into neurons, thus prolonging the activity of these neurotransmitters.  In short, they made your serotonin work longer than it usually does, reducing the need for production of more.  As with Schizophrenia, this earlier drug class generated a large number of side effects because it affected norepinephrine and serotonin throughout the body.  Because TCAs worked on norepinephrine, their action would also increase its signaling elsewhere in your body, for example, affecting your blood pressure through action on your blood vessels and causing arrhythmias through action on the heart.  Once SSRIs were developed, they rapidly replaced the TCA drug class because they were more selective toward serotonin alone and not norepinephrine.

Both Schizophrenia and Depression are examples of psychological disorders that can be treated effectively with some kind of pharmacological intervention.  Frequently, a given patient will end up trying multiple different drugs over the course of their treatment, and sometimes in various combinations.  Unfortunately, there isn’t a single “silver bullet” for taking care of a given psychological disease, as most people manifest the disorders in different ways, with different drugs being more effective at treating different symptoms.  While an SSRI may prove useful in the short-term, it’s possible a doctor will prescribe a TCA later on after the SSRIs lose their effectiveness.  Antipsychotics act similarly.  And more research is being done on new classes and new modifications to old drugs in order to make them more effective, and especially more selective toward their specific target(s).

The larger point to all of this is that the study of psychopharmacology is an effort to control a patient’s emotions and behaviors while not affecting the other aspects of their day-to-day life (i.e. side effects).  These drugs typically manipulate neurotransmission to some degree, and hopefully have some kind of selectivity toward specific aspects of a given disease rather than affecting all transmission of that particular compound.  This can be difficult, and can take decades to fully investigate, but it is certainly possible.  As researchers develop more complex maps of the brain, with more detailed pharmacological profiles, new drug classes can be produced that are more specific to a given individual’s needs.

As this is more than long enough, and I still have more to say on the subject, stay tuned until next month when I hit up Part II.

Primer: Cell Death

A good portion of my graduate work centered on how a given cell dies when exposed to a specific toxin.  In order to develop therapies to prevent the death of that cell, it’s important to understand the means by which the cell dies.  It’s also important to understand how a cell doesn’t die, as I’ll explain later on.

We’ll keep this somewhat simple, though.  There are two (very) basic ways that cells will expire: necrosis and apoptosis.  Necrosis involves the destruction of the cell and, frequently, damage to surrounding cells.  Essentially, the cell ends up swelling and exploding, allowing the intracellular materials to leave and get into the surrounding tissue.  Frequently, necrosis is accompanied by extreme inflammation, causing things like white blood cells/macrophages, the cellular defenders against infections and invaders, to get to that area and try to clean it up.  In the process, they end up creating more damage.  Think of it like a “Scorched Earth” policy of eradication of a given problem.  “Take it out and everything around it to make sure we cleared it up.”

Apoptosis, on the other hand, is thought to be much more controlled.  It is a form of “programmed cell death,” meaning that there are mechanisms built into a cell to allow it to fail properly (unlike the United States banking industry…).  Effectively, when specific signals are received, the cell begins the process of dismantling itself, chewing up its own proteins, shutting down its processes, and packaging itself up for a clean removal by nearby macrophages.  Rather than the “Scorched Earth” means of cleanup, it’s more like putting things in trash bags and putting it out on the curb for the garbage truck to come by and pick them up for you.

Apoptosis is an extremely important process for other things, though.  In the early development of an organism, for example, the neural pathways of the brain and spinal cord are set up such that some neurons will make the proper connection and others won’t.  Those that make the proper connection with their target are strengthened, while those that don’t make the connection receive an apoptotic signal to shut themselves down and make way for other neurons.  Cancer, however, is an example of a disorder where the proper apoptotic signals are not received and the cell does not shut itself down as prescribed.  Instead, it fails to receive or interpret those signals and continues to reproduce itself.  Eventually, it gets to the point where even the “Scorched Earth” means of eradication by inflammation doesn’t work.

So in general, your body would prefer to go the “apoptosis” route over the “necrosis” route, as the latter tends to produce quite a bit more damage to surrounding cells and tissues that your body then has to repair.  Once a cell has started down the path of necrosis, it’s difficult to turn back and save it.  Apoptosis, however, can be slowed or halted precisely because it is so dependent upon intracellular signals.

This image is only a fraction of what’s actually going on in apoptosis, but it does contain some of the basic signaling mechanisms.  Each of those little acronyms is a protein, coded for by a gene in your DNA.  Some of them are turned on because of a signal sent from outside the cell, while others are turned on when the cell starts doing something it shouldn’t, telling itself it needs to shut down and dismantle itself.  However, the key point is that there are ways to use inhibitors against those proteins to slow down the death of cells, if not stop it entirely.  Alternatively, in the case of cancer, some of those signals aren’t functioning properly, and if you can determine which signal isn’t working, you can try to replace it, or “skip over” it and start the signal further down the line.  Think of it as a game of telephone where each of those acronyms is a person, but “cancer” occurs when one of those people decides not to continue the game.  We could potentially use drugs to “skip over” that person and keep the game going, or, to finish the analogy, to keep apoptosis going.
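
The telephone-game analogy can be sketched in a few lines of Python; the protein names here are made-up stand-ins, not a real pathway map:

```python
# A signaling cascade as a game of telephone: the "die" message must pass
# through every link. One broken link (a hypothetically mutated protein)
# stops apoptosis, unless a drug lets the signal skip over it.

CASCADE = ["ReceptorX", "AdapterY", "CaspaseZ", "ExecutionerW"]  # stand-in names

def apoptosis_completes(cascade, broken, skip_broken=False):
    """Return True if the death signal makes it to the end of the chain."""
    for protein in cascade:
        if protein in broken and not skip_broken:
            return False  # message dropped: the cell keeps living (and dividing)
    return True           # message delivered: the cell dismantles itself

print(apoptosis_completes(CASCADE, broken={"CaspaseZ"}))                    # False
print(apoptosis_completes(CASCADE, broken={"CaspaseZ"}, skip_broken=True))  # True
```

The `skip_broken` flag is the hypothetical drug: restart the message past the silent player and the game finishes as intended.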

A lot of what I just said, however, depends on the ability to personalize medicine.  There is a battery of tests that people are run through when they are diagnosed with cancer, but right now, only a few types of cancer can be targeted in such a way.  Usually, we just go the “Scorched Earth” route, much like your own body does, but instead we use radiation and chemotherapeutics.  Eventually, however, once drugs can be personalized to the individual (e.g. figuring out which person along the telephone line isn’t continuing the game), we should be able to target that cancer specifically and shut it down.  Unfortunately, each person is different and each cancer is different (i.e. it isn’t the same person stopping the game in everyone’s situation: it’s sometimes someone else).  Each cancer has to be checked individually for which signal isn’t working, and that takes lots of time and lots of money.

But science and medicine is getting there.  Slowly, but surely.

Primer: Drug-Drug Interactions

For a combination of reasons, there are quite a few folks out there today that have a cocktail of drugs pumping through their blood stream.  The elderly, for example, at any given time, can be taking upwards of 10 different medications to manage their back pain, arthritis and blood pressure…and then the depression they feel because they are on so many drugs.  It’s bad enough that they have to be on so many meds, but then when they go to the hospital with another problem, the doctors have to slowly pull them off the drugs they are already on in order to isolate the problem, and then come up with a new cocktail of drugs.  This is especially a problem because so many people have multiple different doctors, some of which aren’t aware of what medications (i.e. type and dosage) their patients are taking.  And those doctors will sometimes disagree with each other and change the medications back and forth depending on which doctor sees them on a given visit.

But that’s a different discussion.  🙂

All doctors and pharmacists are aware of what are called “Drug-Drug Interactions,” which is basically the idea that one drug you are taking can alter the effects of another, either by directly interacting with the drug itself, or with the receptors that another drug is trying to access.  Very commonly, especially in the case of the elderly, it can also occur during metabolism, the act of breaking down a drug so it can be excreted from the body and effectively inactivated.

The common example of a drug-drug interaction involving metabolism (as taught in graduate school and medical school) is that of grapefruit.  Terfenadine, for example, was a very popular antihistamine that is metabolized by a specific cytochrome P450 enzyme, CYP3A4.  It turns out that components of grapefruit juice (and the antibiotic erythromycin, among others) inhibit CYP3A4.  In order for terfenadine to be effective, it has to be converted by CYP3A4 into its “active metabolite” (i.e. the drug that actually helps you isn’t terfenadine itself, it’s the metabolite of terfenadine).  If you are drinking lots of grapefruit juice, that conversion is blocked: you don’t get the active metabolite formed and you keep excess terfenadine around in your body.  Unmetabolized terfenadine, unfortunately, causes arrhythmias of the heart (which is what led to its withdrawal from the market).

So in this case, something as simple as grapefruit juice caused a drug to not function properly, leading to unwanted, and unsafe, side-effects.

Another example of drug-drug interactions via metabolism is the combination of acetaminophen (Tylenol) and alcohol.  Acetaminophen is partly metabolized by cytochrome P450 isoforms CYP2E1 and CYP1A2 to a compound called NAPQI, which can cause severe liver damage if it hangs around too long; normally, NAPQI is quickly converted, using glutathione, into innocuous by-products.  It turns out that metabolizing alcohol draws on that same pool of glutathione.  If you are drinking alcohol and you take acetaminophen, it’s very likely that your liver will produce more NAPQI than it can deal with (i.e. due to decreased glutathione levels caused by the alcohol), thus causing acute liver toxicity.
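
The interaction boils down to a budget problem.  The sketch below uses made-up “units” purely to show the logic (the real pharmacokinetics are far more involved): NAPQI is only harmless while glutathione remains to neutralize it.

```python
# Toy glutathione budget: any NAPQI left over after the glutathione pool
# is spent is what threatens the liver. Quantities are invented units.

def leftover_napqi(napqi_produced, glutathione_pool):
    """NAPQI that goes un-neutralized once glutathione runs out."""
    return max(0, napqi_produced - glutathione_pool)

napqi_from_dose = 60       # same acetaminophen dose in both scenarios
print(leftover_napqi(napqi_from_dose, glutathione_pool=100))  # 0  -> safe
print(leftover_napqi(napqi_from_dose, glutathione_pool=40))   # 20 -> toxic excess
```

Same dose, different glutathione reserve, very different outcome, which is the whole interaction in a nutshell.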

Those are a few examples of how metabolism of one drug can affect another drug.  How about absorption of drugs then, eh?

Tetracycline is an antibiotic that many of us have taken or will take within our lifetimes.  It readily binds metal ions.  You shouldn’t take tetracycline along with antacids, however, as antacids tend to contain aluminum.  Aluminum ions from antacids, or iron from supplements, can form what’s called a “chelate” with tetracycline, reducing your body’s ability to take it up into the blood stream.  The same thing happens with calcium ions, so you can’t take tetracycline along with milk, yogurt, or other dairy products.

You can also get what we call “additive” or “synergistic” effects when you take two drugs that do effectively the same thing in different ways.  For example, people take nitroglycerin in order to cause vasodilation, and it does so by producing nitric oxide that then elevates cGMP in vascular smooth muscle cells (ultimately, cGMP is responsible for relaxation of those muscle cells, thus allowing your blood vessels to open up further).  Sildenafil (Viagra) also elevates cGMP, by inhibiting the enzyme that breaks it down.  Moral of the story: if you are taking Viagra, and you also take some kind of nitrate like nitroglycerin, you can give yourself catastrophic hypotension (i.e. a huge drop in blood pressure).

Warfarin is an anticoagulant with a very small “therapeutic window,” which means that too much or too little of the drug can cause some serious damage to your body.  You have to be very careful when you’re on warfarin, because any variation can cause you to either form a blood clot, causing a stroke, or not clot enough, causing you to bleed out.  Aspirin is a drug a lot of elderly folks take just to help with their heart.  Typically in a low-dose form, aspirin is good to help limit your risk for heart attack and stroke, but if you take any aspirin while you’re also taking warfarin, you can dramatically increase your chances of bleeding, especially gastrointestinal bleeding: taking them together can increase your risk almost four-fold.
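
To put a number like “four-fold” in perspective, here is a quick back-of-the-envelope calculation; the baseline risk used below is invented for the example, not a clinical figure:

```python
# What a four-fold relative risk does to a small baseline risk.
baseline_bleed_risk = 0.005   # hypothetical 0.5% risk on warfarin alone
relative_risk = 4             # warfarin + aspirin, per the text

combined_risk = baseline_bleed_risk * relative_risk
print(f"{combined_risk:.1%}")  # 2.0%
```

A risk that small still quadruples, which is exactly why a narrow therapeutic window makes this combination so dangerous.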

All of the preceding examples illustrate how one drug or compound can affect the ability of another drug to work or to be broken down, or in some cases can actually increase the effect of another drug or compound on your body.  The moral of the story is to remain cognizant of what drugs you are on and in what dosage.  Most medical professionals are aware of potential interactions between different drugs, and the examples listed above hopefully illustrate why they need to be aware of what you are taking and why.  If you have elderly parents or grandparents, it is extremely important that they keep a list of medications that they are currently taking with them at all times, especially if they see different doctors for different ailments.  If they were involved in a car accident and needed to go to the emergency room, it would save time and effort to have an up-to-date list of their medications with them, rather than having E.R. docs search to figure out what they are taking.

Of course, if you, yourself, are taking multiple medications now, or know others that are, it is equally important for you, too.  Most drugs will have warning labels on the side of the packaging that help you know what you can take a drug with and what you can’t.

Just bear in mind that, if you really like drinking a vodka and grapefruit juice before bed every night, you may need to tell your doctor before they prescribe anything to you.  🙂

Primer: Drug Discovery

There are a few ways to approach the general idea of drug discovery, but I’m going to tackle it from a historical angle first, and maybe revisit it in a future Primer.  I am part of the Division of Medicinal and Natural Products Chemistry at the University of Iowa, and its two components, Medicinal Chemistry and Natural Products, are both integral to the idea of developing new drugs.  Medicinal Chemistry is just as it sounds: the study of designing and synthesizing new drugs, using principles of chemistry, pharmacology and biology.  The idea of Natural Products, however, is a bit more interesting in that, just as it sounds, it studies chemical compounds “developed” in other organisms that may be useful as drugs.

The oldest records tend to cite the ancient Chinese, the Hindus and the Mayans as cultures that employed various products as medicinal agents.  Emperor Shen Nung, in 2735 BC, compiled what could be considered the first pharmacopeia, including the antimalarial drug ch’ang shang, and also ma huang, from which ephedrine was isolated.  Ipecacuanha root was used in Brazil for treatment of dysentery and diarrhea, as it contained emetine.  South American Indians chewed coca leaves (containing cocaine) and used mushrooms (containing psilocybin, a tryptamine) as hallucinogens.  Many different examples of drug use in ancient, and more modern, cultures can be pointed to as early forerunners of today’s drug industry.

However, it was the 19th and 20th centuries that really kick-started the trend, as this is when modern chemical and biological techniques started to take hold.  It was in the 19th century when pharmacognosy, the science that deals with medicinal products of plant, animal, or mineral origin, was replaced by physiological chemistry.  Because of this shift, products like morphine, emetine, quinine, caffeine and colchicine were all isolated from the plants that produced them, allowing for much purer, and more effective, products to be produced.  Advances in organic chemistry at the time really helped with the isolation, so these discoveries wouldn’t have been possible previously.

In today’s world, there are a few ways you can go and discover a new drug:

  1. Random screening of plant compounds
  2. Selection of groups of organisms by Family or Genus (i.e. if you know one plant that makes a compound, look for more compounds in a related plant)
  3. Chemotaxonomic approach investigating secondary metabolites (i.e. the specialized compounds an organism produces beyond its basic metabolism, which often have biological activity)
  4. Collection of species selected by databases
  5. Selection by an ethnomedical approach

I think the latter two are the most interesting, especially with a historic perspective.  With the latter, we’re talking about going into cultures (a la the movie “Medicine Man“) and learning about the plants that they use to cure certain ailments, then getting samples of those plants and figuring out what makes them effective.  It has been estimated that of 122 drugs of this type used worldwide from 94 different species, 72% can be traced back to ethnic groups that used them for generations.

What is worrisome for the discovery of new drugs of this type is that these cultures are dying out or becoming integrated into what we’d consider “modern society.”  The old “medicine men” and “shamans” die before imparting their knowledge to a new generation, and these kinds of treatments are lost.

The collection of species and formation of databases is interesting, and only more useful in recent history due to the advent of computers that can actually store and access all the information.  With this process, we’re talking about going into a rain forest, for example, and collecting every plant and insect species you can find, then running various genetic and proteomic screens on the cells of each plant and insect to see whether they produce anything interesting or respond to anything.  This process can involve thousands of species across a single square mile in a rain forest, necessitating a great deal of storage space for the samples themselves, but also computing power to allow other researchers the ability to search for information on that given species.

An example of a “screen” that one could carry out would be to grow bacteria around your plant or insect samples.  If you’ve ever heard the story of penicillin, you’ll know that Alexander Fleming (1928) noticed that his culture of Staphylococcus bacteria stopped growing around some Penicillium mold that had found its way into the culture.  From that mold, penicillin was developed as our first antibiotic.  The same kind of principle can be applied here: mix your samples together and “see what happens.”  If anything interesting happens, you then continue investigating that sample until you isolate the compound that is doing that interesting thing.

The isolation of that “interesting compound” can be very tricky, however.  In many cases, a particular anticancer agent or antibacterial agent may be housed inside the cells of our plant species.  Getting that compound out may be difficult, as it could be associated with the plant so tightly that you have to employ a variety of separation techniques.  And even after you apply those techniques, what you are left with may be nonfunctional, as the compound may require the action of that plant itself to work properly (i.e. the compound you want may still need other components to work).  Even after you isolate the compound you want, in order to make it a viable drug, you have to be able to synthesize it, or something like it, chemically in a lab setting.  Preferably, on a massive scale so you can sell it relatively cheaply as a drug to the masses.  These processes can be daunting and costly.

So basically, it can be fascinating to discover new drugs, especially ones that were actually “discovered” thousands of years ago by cultures that have long since died out.  However, you may find that “discovering” the drug may be the easy part – mass producing the drug could be the most challenging aspect of the ordeal.

Primer: Scientific Funding

One would like to think that major universities spend their money on research for their various faculty members, but unfortunately for me, that typically isn’t the case.  Sure, there is a reasonable amount of money going to fund the research carried out by faculty members in biology, physics, and chemistry departments, but the reality is that in order for that research to occur (and, moreover, for almost all of the important discoveries under the umbrella we call “Science”), money must come from sources other than the university.  In many cases, your tenure and rank at your given institution are determined by how much outside funding you bring in and where it comes from.

The majority of scientific funding in the United States comes from the Federal Government, mostly in the form of the National Institutes of Health (NIH) and, to a lesser degree, the National Science Foundation (NSF) and Department of Energy (DoE).  Scientific American did a great job recently summing up how much money goes into which pot at the Federal level with an easy-to-read graphic that I suggest you glance at.  Basically, the NIH gets $28.5 billion to divide amongst its various projects, including grants that professors and other individuals apply for.  The NSF gets $4.2 billion, and the DoE gets about $3.5 billion to devote to research.  For comparison’s sake, the Department of Defense gets $56.2 billion (excluding special funding in war-time).

Obviously, the NIH is getting a substantial piece of that pie.  For the most part, if you are doing biomedical research like I am, the NIH is the first place you apply to.  They will generally fund anything that you can tie to a disease or disorder.  The NSF, on the other hand, won’t touch a grant that even implies it could help with disease research, focusing instead on basic research.  Chemists and physicists can find applications at the NIH, but usually the NSF and DoE (or others) are where they have to look for funding.  And that pot is much smaller than the NIH pot.

The process of applying in each agency varies, but for the most part, you go about it the following way:

  1. Find a grant application that applies to your research
  2. Write the application according to their explicit instructions
  3. Submit the grant by a given due date (usually a few times per year)
  4. The grant is assigned to a division of the agency and then further assigned to a committee
  5. The committee, made up of people who should know what they’re doing, ranks each grant it receives based on merit, need, and contribution to science
  6. The committee can fund only a limited number of grants (usually between 5% and 20% of those submitted)
  7. Funding is decided and you are notified of the decision

There are usually three decisions that can be made.  Either a). the funding agency can grant you the money and accept your project as-is; b). the agency can give your grant a rank or score and suggest you make some changes and resubmit it; or c). they can “triage” your grant, basically saying they didn’t even score it, and that it needs significant work to make the cut.  The committee in question will usually give you some kind of pointers as to why your grant was or wasn’t funded, but that experience will vary across agencies and committees.

The NIH has a few different grant series that you can apply for.  Some, like the one I applied for in early December, are considered “training grants.”  So in this case, the grant I applied for was a post-doctoral training grant (designated “F32”) that would pay my salary for 2-3 years, based on the project I outlined to them.  No equipment or anything would be paid for – just my subsistence.  Alternatively, the “Big Daddy” grant to get is designated “R01,” which is a big league research grant that awards up to $5 million to a researcher and their lab, paying for salaries, equipment, and even some travel money to conferences.  At many big academic institutions, you need to get an R01 before you can achieve tenure.  At some of them, you need two.  The going funding rate for these grants has been in the 8-10% range, which is pretty low.  It’s tough to get an R01 and you can spend a lot of your time writing these grants and trying to get them, rather than actually doing research.

There are alternatives to federal money, of course.  You could call these Private, or “Foundation,” Grants.  These entities are frequently not-for-profit groups set up to fund research according to their own specifications.  The Michael J. Fox Foundation for Parkinson’s Research is one you may have heard of.  The American Heart Association is another.  The grants these foundations fund are typically quite a bit smaller than those funded by the government, rarely reaching into the millions of dollars.  They are also quite competitive, arguably even more competitive than federal funding.  Generally, you end up spreading yourself thinner across multiple foundation grants if that’s how you have to fund your lab, rather than relying on a single federal grant (or two…).  It all depends on how large your operation is, how many people are under you, and how many projects you have running at a given time.

I’ll leave you with one last point about the funding of science (insert soap box here): the majority of scientific innovations and true breakthroughs come from the funding agencies listed above: NIH, NSF and DoE.  Private industry, such as Pfizer or Merck, carries out its own research and development programs, but it relies heavily on basic research carried out in academic settings.  This is partially because these companies cannot patent what someone else has already published in a journal article, so they have to take that research, apply it to their own needs, and then create a patent that they can make money from.  When federal funding for science drops, or doesn’t even keep pace with inflation, professors bring in less money and cannot afford to pay their workers.  That means less basic research is done.  That means private industry has to devote more money to R&D in order to make new discoveries, which increases the amount of money they need to put into developing a drug (more on that in a future Primer…).  Finally, that means the drugs and treatments that reach you cost more money, adding to the sky-rocketing health care costs we already have, mostly because the basic research that private industry did is now covered under a patent for 10 years, and no one else can make money on it and compete.

Funding of science at the federal level is incredibly important.  It’s hard enough as it is to get a grant, and it is vitally important that the money NIH, NSF, DoE, etc. get does not decrease, but instead increases.  That’s where scientific innovation comes from in the United States.  It’s why people from all over the world come here to get a Ph.D. and do research.  Because the United States values innovation and discovery.

As well they should.

Primer: Drug Metabolism

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

I chose to work on this subject for December because I may end up teaching a lecture or two on metabolism in early February to pharmacy students.  Obviously I’ll go more in-depth with them, but that isn’t the purpose of these Primers: they are intended as introductions.

Merriam-Webster defines “metabolism” as such:

Metabolism –noun

a.  …the chemical changes in living cells by which energy is provided for vital processes and activities and new material is assimilated

b. the sum of the processes by which a particular substance is handled in the living body

This definition is all well and good, but we’re talking about a specific form of “metabolism” here: the breakdown of a chemical compound, not necessarily for the purpose of generating energy.

Wikipedia provides us with a separate definition for drug metabolism:

Drug metabolism is the biochemical modification of pharmaceutical substances by living organisms, usually through specialized enzymatic systems.

So when we’re talking about an individual, such as an athlete, who has a “strong metabolism,” we’re talking about processes related to, but separate from, the ones typically involved in the modification and removal of drugs from your system.

In general, drug metabolism consists of two separate processes known as Phases.  In Phase I metabolism, a given compound is broken down and typically inactivated (but not always, as we’ll see shortly).  This usually involves a specialized protein called an enzyme, which removes or modifies a specific portion of the compound, often rendering it pharmacologically inactive.  Phase II metabolism typically involves the addition of another molecule onto the drug in question, something we call a “conjugation reaction.”  This process also serves to increase the polarity of the drug.  Usually, Phase I reactions precede Phase II reactions, but not always.

When I say “polar,” I mean it in a sense similar to a planet, in that a planet has “poles” (e.g. north and south).  For the sake of simplification, you can also think of a magnet or a battery instead, with a “positive” pole and a “negative” pole.  In this fashion, chemicals also have a positive and negative charge, including chemicals like water:

In this case, the oxygen atom in water (i.e. H2O) is negative while the two hydrogen atoms are positive.  Therefore, water is polar: it has an end that is more positive and an end that is more negative.  Polar compounds are also considered “hydrophilic” (i.e. “water-loving”), mostly because these polar chemicals tend to dissolve readily in water.

There are examples of “hydrophobic” (i.e. water-fearing) chemicals as well, also known as non-polar.  You know how oil and water don’t mix?  That’s because oils like fats or lipids are hydrophobic and non-polar, made up of molecules that look kinda like these.

These are all examples of hydrophobic (non-polar) compounds, those that do not mix well with hydrophilic (polar) molecules like water.

The key to drug metabolism is to realize that most of your cells, and thus organs, are made up of lipids such as these, so if you have a drug that is particularly “lipophilic” (and thus, hydrophobic), then the drug is more likely to hang around in your body.  That is to say, a drug that is non-polar can hang around longer, affecting you for longer than you may otherwise want.  If you use a more polar drug (i.e. hydrophilic), it’s more likely to get passed out of your body much faster.  Much of your body’s ability to expel chemicals and metabolites depends on the ability of your kidney and liver to get those chemicals and metabolites into a form that works well with water, as water is what you typically get rid of (i.e. urine).

When your body recognizes a foreign compound, such as a drug, it wants to make that compound more polar so it can excrete it.  Thus, your liver contains a number of enzymes that do their best to make foreign compounds more polar so you can get rid of them.

This process obviously impacts the ability of a drug to act, which is why it matters how a drug is introduced to your body: orally (i.e. through the stomach/intestines), intramuscularly, or intravenously.  If you take a drug orally, it is subjected to what is termed First-Pass Metabolism.  Typically, when you eat something, the nutrients from whatever you ate are taken up through the portal system and hit your liver before they hit your heart, and only then go on to the rest of your body.  Therefore, if you take Tylenol for a headache in pill form, some of it will be broken down in the liver before it reaches the heart and gets pumped to your brain to help with your headache.

Alternatively, you could take Tylenol intravenously, which bypasses the liver and thus gives you a full dose.  However, Tylenol is toxic in high doses, so you would never want to inject much of it (or any of it…there are better choices if that’s what you’re considering….) for fear that it could kill you.

The final concept to consider, aside from drug modification, polarity and first-pass metabolism, is how we can use this system to our advantage.  Take a benzodiazepine like valium (diazepam).  Valium, on its own, is very useful as a depressant, used to treat everything from mania to seizures; however, drug metabolism produces metabolites that are themselves active (called, not surprisingly, active metabolites).  Valium is broken down in the liver into nordiazepam and temazepam, both of which are further converted into oxazepam.  Each of these metabolites is active to some extent, which means that a single dose of valium will last for quite a while as it’s broken down into other compounds that still affect you.
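To get a feel for why active metabolites stretch out a drug’s effects, here is a toy simulation of a parent drug being converted into a single active metabolite, which is then cleared.  This is a minimal sketch with made-up rate constants, not real pharmacokinetic data for valium:

```python
# Toy model of an active-metabolite cascade: parent drug -> active metabolite
# -> cleared.  The rate constants are made up for illustration; they are NOT
# real pharmacokinetic parameters for valium or anything else.
def simulate_cascade(k_parent=0.05, k_metab=0.02, hours=120, dt=0.1):
    """First-order kinetics, stepped forward with simple Euler integration."""
    parent, metab = 1.0, 0.0          # start with a unit dose of parent drug
    history = [(0.0, parent, metab)]
    t = 0.0
    while t < hours:
        formed = k_parent * parent * dt   # parent converted by the liver
        cleared = k_metab * metab * dt    # metabolite cleared from the body
        parent -= formed
        metab += formed - cleared
        t += dt
        history.append((t, parent, metab))
    return history

hist = simulate_cascade()
```

Plotting `history` would show the parent drug falling quickly while the metabolite rises and then slowly falls, so the total active drug lingers well after the parent is mostly gone.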

Sometimes, you can administer a non-active drug that then becomes active once it’s modified in your liver.  We call this a prodrug.  Codeine, for example, is modified by Phase I metabolism to its active form, morphine.  You typically administer morphine to someone intravenously, as it’s rapidly metabolized in the liver.  Codeine allows you to take advantage of your liver to give you morphine in a pill form, which you otherwise wouldn’t be able to do (as it would be broken down too far before it even hit your heart).

In short, drug metabolism is an extremely important process to consider when designing a drug.  You need to take ease of use and route of administration into account, you need to consider whether a drug has active metabolites or not, and you need to be aware of how hydrophilic/hydrophobic a drug is if you want it to remain in your body for any reasonable amount of time.

Primer: Structure of the Brain

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

I can’t say I’ve been excited about writing this one, as brain anatomy is, quite possibly, the most boring thing I can think of to write about.  I did a rotation at SLU in a lab that focuses on anatomy and how individual brain structures interact with one another, and that 6-week period was more than enough for me.  As that professor told me, it’s very important work that someone needs to do, even if it may not seem all that interesting.  This kind of work is how researchers have figured out which brain component “talks” to which other one(s), and how intertwined all these connections really are throughout the brain.

For the sake of this posting, I’ll simply point out that brain mapping has been carried out in a variety of ways.  Quite a bit of it was done over the decades by studying people who had hit their heads.  If a patient lost their memory, or their sense of smell, clinicians could localize the injury to a specific area of the head, then look at the brain post-mortem to see what had happened.  Ultimately, they would find a lesion of dead tissue in that region that led to the deficit.  Similarly, the study of stroke victims also provided clues to the function of certain brain regions, as a stroke occurs when blood flow is cut off to an area of the brain, typically leading to damage.  Alternatively, modern science uses stereotactic injections of traceable materials in mice, rats and primates; these tracers can be visualized in brain slices, showing that neurons in one area are connected with neurons in a separate region of the brain.

It is through this work that certain pathways were elucidated, including the reward pathway (very important for drug addiction, gambling addiction, etc.); the movement pathway (mostly for Parkinson’s disease, but important for voluntary movement in general); the sensory systems (how the visual cortex signals, the auditory cortex, etc.); the amygdala (figuring out what this structure did and where it went led to quite a few lobotomies back in the day); and memory (signals transferred between the hippocampus, the reward system, and the cortex…a very complicated network…).  Brain mapping like this helped determine where everything connects and which areas are important.

While the human brain is a difficult nut to crack, it can be divided up into different portions.  For the sake of this little blurb, we’ll focus on the three primary divisions of the brain: the prosencephalon (forebrain), the mesencephalon (midbrain) and the rhombencephalon (hindbrain).

The prosencephalon, or forebrain, is further divided into the telencephalon and the diencephalon.  The telencephalon consists, primarily, of the cerebrum, which includes the cerebral cortex (voluntary action and sensory systems), the limbic system (emotion) and the basal ganglia (movement).  As you can see from that list, the telencephalon largely constitutes what “you” are: your thoughts, your feelings, and your interaction with the world around you.  It’s where a lot of your processing happens.  The telencephalon in humans is quite a bit more developed than in other species; it’s really what separates the human brain from that of a less-developed species like the chimpanzee.  The diencephalon, on the other hand, consists of the thalamus, hypothalamus and a few other structures.  The thalamus and hypothalamus are very important for various regulatory functions, including interpretation of sensory inputs, regulation of sleep, and release of hormones to control eating, drinking, and body temperature.

The mesencephalon, or midbrain, comprises the tectum and the cerebral peduncle.  The tectum is important for auditory and visual reflexes and tends to be more important in non-mammalian vertebrates, as they don’t have the developed cerebral cortex that humans do (more on that later).  The cerebral peduncle, on the other hand, is a mixed bag of “everything in the midbrain except the tectum.”  It includes the substantia nigra, which ties into the movement and reward systems.  I think it’s fair to say that, aside from these things, the function of the midbrain, overall, has yet to be fully determined.

The rhombencephalon is quite important, even though it’s probably the oldest part of the brain, from an evolutionary standpoint.  It includes the myelencephalon (medulla oblongata) and the metencephalon (pons and cerebellum).  The medulla oblongata is important for autonomic functions like breathing and heart function.  The pons acts primarily as a relay with functions that tie into breathing, heart rate/blood pressure, vomiting, eye movement, taste, bladder control and more.  Finally, the cerebellum is important for a feeling of “equilibrium,” allowing for coordination of movement and action, timing and precision.

As you may have noticed, if you go from back to front, you get increasing complexity in brain function.  The hindbrain handles very basic things like breathing, heart rate, and coordinated movement, functions that are important in nearly all organisms, all the way down to the smallest worm or insect.  Further up, the mesencephalon starts to work in further control of reward and initiation of voluntary movement, giving the organism voluntary control rather than solely reflexive control.  Then, the diencephalon acts like a primitive brain, handling regulatory functions and more complicated reflexes to help maintain the more complex organism that has been assembled.  And finally, the telencephalon exerts the ultimate control over the organism, with things like memory, emotion, and greater interpretation of sensory inputs.  As the image above shows, the hindbrain (to the right-hand side) remains a large portion of the brain in the rat and the cat, but the human forebrain (the top/left-most portion) is much larger relative to the hindbrain.  With that size comes greater development of brain structure and function.

So yeah, the brain is kinda complicated.  Actually, it’s really complicated and, for the most part, I do my best to ignore all of the complex wiring networks that occur within.  However, it is important work that needs to be done in order for surgeons to do what they do, and for neuropharmacologists to develop drugs that target some brain areas and not others.  For the most part, I’ll leave this research to more interested people…

Primer: Memory

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

The whole idea of “memory” has intrigued me for quite a while, arguably since before I was even that interested in science in general.  Part of this is my attraction to all things computers.  I think I built my first computer (rather, helped Dad build one…) back in the late-90s, and at that time, I began to understand all of the components that make it function.  The idea of “input/output,” the function of a central processing unit (CPU), RAM and hard drives…all of these things proved relatively easy to grasp, and understanding those general functions made my understanding of the brain a bit easier in the process.

Let’s think of it this way.  You interact with computers in different ways, but one way is with a keyboard.  You type something into the keyboard, and the data you input is converted by the CPU into something that can be understood by the system, in this case binary code (i.e. a series of “1s” and “0s”).  All of your inputs from the keyboard are stored in RAM for faster, short-term access.  If you click the “Save” button on whatever you’re doing, however, the data stored in RAM gets sent to the slower-access hard drive.  As you open programs, information is pulled off the hard drive and into RAM so that your CPU can process it faster, and then you and your keyboard can interact with it.  This is why, in general, having more RAM speeds up your computer: it can pull larger and larger programs into RAM so your CPU can get at them more easily, and thus, you can interact with them faster.

In very basic terms, your brain works the same way.  We have inputs in the form of our 5 senses.  The information from those senses gets encoded by your brain’s Cerebral Cortex and is stored temporarily in the Hippocampus (i.e. RAM) before being encoded for long-term storage back in other regions of the Cortex (i.e. hard drive).  Most of the time, your brain “Saves” its data to the Cortex at night, which is why sleeping is so very important.  The “processing” portion of this paradigm can be confusing, but keep in mind that the brain is divided into specific regions.  There’s a “visual cortex,” an “auditory cortex,” etc.  These regions (within the Cortex…) interpret what each sense gives you and then send that information through the Temporal and Parietal Lobes (also in the Cortex).  From there, the information is spread to the Hippocampus (i.e. RAM) for “integration” before being set as full, long-term memories out in the rest of the brain.

How is that information stored, you may ask?  Again, it’s much like a hard drive.  If you’ve used computers extensively, you know that hard drives are divided into “sectors” (ever get a disk read error that says “bad sector”?).  When you have a new hard drive, you start with a clean slate.  As you install programs and add files, it fills up.  Once you delete something, that sector isn’t really erased; it’s just removed from your access, and the data isn’t truly gone until it’s overwritten by something else (which is why you can sometimes retrieve old files off a hard drive that you thought had been deleted).  Whenever you “defragment” your hard drive, you are basically rearranging those programs to keep everything closer together, and thus quicker to access.  The data encoded on the hard drive is written in “1s” and “0s” (i.e. binary code).  Each 1 or 0 is a “bit,” while a set of eight 1s and 0s (e.g. 11010101, 10011010, etc.) is a “byte.”  This is where “kilobytes,” “megabytes” and “gigabytes” come from.

The idea of 1s and 0s comes from logic, specifically the definitions of “True” (i.e. 1) and “False” (i.e. 0).  If you have a “1,” then you have a connection.  If you have a “0,” then you don’t.
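You can see all of this directly in Python, which will happily show you the bytes and bits behind a piece of text:

```python
# Quick illustration of bits and bytes: every character of the string is
# stored as one byte, and every byte is a set of eight 1s and 0s.
text = "memory"
raw = text.encode("ascii")               # one byte per character
bits = "".join(f"{b:08b}" for b in raw)  # each byte written out as eight bits

print(len(raw))    # 6 bytes
print(len(bits))   # 48 bits
print(bits[:8])    # the eight 1s and 0s encoding the letter 'm'
```

A kilobyte is then roughly a thousand of those bytes, a megabyte a million, and a gigabyte a billion.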

Bringing this back to neuroscience, the same general rule appears to apply to memories, and to the concept of “learning” in general.  In order to form a memory, it needs to be encoded much like your hard drive is: in a series of combinations of connections (or missed connections) between neurons spanning the entire brain.  There are various molecular mechanisms that can account for these connections, or lack thereof, and those go back to receptor theory.  Remember that neurotransmission involves the release of a neurotransmitter (e.g. dopamine, adrenaline, etc.) from one neuron to bind with a receptor on another.  If a neuron stops receiving signals from another neuron, it will remove receptors from the outside of the cell, thus limiting or negating the signal.  If, however, a neuron keeps getting increased signaling from an adjacent neuron, the receiving neuron will increase the number of receptors on the outside of the cell, making it easier to signal.  Therefore, we have a mechanism for strengthening or weakening the connections between two neurons.

One could consider a “strengthened” neuronal connection to be a “1” and a “weakened” neuronal connection to be a “0.”  It is in this way, it is thought, that memories can be formed on a cell-to-cell basis.
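A toy sketch of that idea follows, with an arbitrary learning rate and threshold.  This is an illustration of the 1/0 analogy only, not a real model of neurons:

```python
# Toy sketch of the "strengthen or weaken" idea above.  The learning rate and
# threshold are arbitrary illustration values, not a real neural model.
def update_strength(strength, fired_together, rate=0.1):
    """Strengthen the connection when the neurons fire together, weaken it otherwise."""
    if fired_together:
        return min(1.0, strength + rate)
    return max(0.0, strength - rate)

def as_bit(strength, threshold=0.5):
    """Read the connection out as a 1 (strong) or a 0 (weak)."""
    return 1 if strength >= threshold else 0

s = 0.5
for _ in range(5):                       # repeated co-activation
    s = update_strength(s, fired_together=True)
print(as_bit(s))   # a repeatedly used synapse reads out as a 1

w = 0.5
for _ in range(5):                       # repeated silence
    w = update_strength(w, fired_together=False)
print(as_bit(w))   # an unused synapse fades toward a 0
```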

These neurons that memories are stored in are located throughout the brain, similarly to “sectors” on your hard drive.  As you stop using certain memories, the synapses of those neurons weaken to the point where they can be, effectively, “overwritten” in favor of a new memory.  This is also how the idea of “repressed memories” can come about, in that you can have a memory stored in a region of your brain that you have forgotten about, but can re-manifest later: if it isn’t overwritten, it’s still there.

From a molecular standpoint, scientists have a pretty good idea how memory “works,” but being able to decode those memories is a whole different beast.  Returning to our computer metaphor, imagine knowing nothing about computers and finding a hard drive.  What would you do with it?  Would you take it apart?  How would you know what it was?  Or what it contained?  And once you figured out that it, somehow, contained information, how would you read it?  If you eventually found out that it involved 1s and 0s, how would you know how those 1s and 0s were organized across the hard drive, and then finally, what they told you?

This is why it’s highly unlikely that we’ll ever be able to make or see memories like we do in the movies, at least not for a very long time.  It’s one thing to understand the basis for how memory works, but it’s a whole other thing to figure out how it’s organized within a system like the human brain.  It’s been estimated that the human brain contains terabytes of information, which translates to somewhere between 8,000,000,000,000 and 8,000,000,000,000,000 individual 1s and 0s, or individual neuronal connections.
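Those bit counts are just a unit conversion.  Assuming decimal terabytes (10^12 bytes) and 8 bits per byte, an estimate of 1 to 1,000 terabytes gives:

```python
# Checking the arithmetic: 1 terabyte is roughly 10**12 bytes, and each byte
# holds 8 bits, so 1 to 1,000 terabytes works out to the range quoted above.
BITS_PER_BYTE = 8
low = 1 * 10**12 * BITS_PER_BYTE        # 8,000,000,000,000 bits
high = 1000 * 10**12 * BITS_PER_BYTE    # 8,000,000,000,000,000 bits
print(f"{low:,} to {high:,} individual 1s and 0s")
```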

Imagine looking at a sheet (or multiple sheets…) of paper with that many 1s and 0s on it and trying to decide which version of Windows it represents.  Or where your dissertation is…not the Word Document, but the PDF.  That’s what we’re talking about.

So yeah, I just find the concept of memory to be fascinating.  With modern computers, we’re effectively reverse-engineering the human brain and, in doing so, learning more and more about how technological and biological computation can work.  But next time you see some “memory reading” device on TV, bear in mind what’s actually required to make that technology work.

Primer: The Scientific Method

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

There are quite a few things that go flying by in the news that concern me (and I have posted about them here…at…length…), but one that really gets to me is public misunderstanding of Science.  As in, capital “S” Science.  Not really the fact that many people don’t know certain scientific facts, or don’t really understand how many things work, but more that they do not understand how science is done and what it really means.  I will seek to clear up some of that here.

First, however, what does the dictionary tell us?

Science – noun

1. a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws: the mathematical sciences.
2. systematic knowledge of the physical or material world gained through observation and experimentation.
3. any of the branches of natural or physical science.
4. systematized knowledge in general.
5. knowledge, as of facts or principles; knowledge gained by systematic study.

Now, this definition seems to center on the natural/physical sciences, but many, if not all, of the principles that “science” adheres to also apply to the social sciences (e.g. sociology, psychology, etc.) and other fields.  However, I will focus on what I know best.

“Systematically” is the word sprinkled about in the definition above, and rightfully so.  “Systematically” refers to how science is conducted, generally through what we refer to as the scientific method.  The Wikipedia article, as usual, is a good start for further information on this particular subject, but basically, here’s how it works:

  1. Formulate a hypothesis
  2. Test the hypothesis through experimentation and observation
  3. Use collected data to confirm or refute the initial hypothesis
  4. Form a new hypothesis based on what was learned in steps 1-3

A “hypothesis,” put simply, is an educated guess at the answer to a question you have.  Often, especially when you’re first learning the scientific method, you phrase it in the form of an “If/Then” statement.  For example:

If I drop this rock, then it will fall

The “If” portion of the above statement represents the “Independent Variable,” while the “Then” portion represents the “Dependent Variable.”  Effectively, the Dependent Variable is what you’re measuring, and the Independent Variable is what you’re changing in the system.  In this particular case, if you drop the rock, does it fall or not?  You can measure whether or not it falls.  If you don’t drop the rock, does it still fall?  And so on.  It is called the Dependent Variable because it “depends” on what you do with the Independent Variable.

You can test multiple Independent Variables across a series of hypotheses, but within a single experiment you should change only one at a time, and the Dependent Variable you measure cannot change.  What would happen if I dropped a rock on Earth and dropped another one on Mercury?  My results wouldn’t be comparable, because I changed too many things at once.  I could change the size of the rock, but if I’m measuring the rate at which the rock falls to the ground, I need to make sure the force of gravity is held constant.
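Here is a sketch of the one-variable-at-a-time idea using the idealized physics of free fall (ignoring air resistance).  The heights and masses are arbitrary illustration values:

```python
import math

# Vary ONE Independent Variable (the rock's mass) while holding everything
# else (height, gravity) constant, and measure the same Dependent Variable
# (fall time).  Idealized physics: no air resistance.
def fall_time(height_m, g=9.81):
    """Fall time from rest: t = sqrt(2 * h / g)."""
    return math.sqrt(2 * height_m / g)

HEIGHT = 10.0                        # meters, held constant across trials
for mass_kg in (0.1, 1.0, 10.0):     # only the mass changes
    print(f"{mass_kg} kg rock falls in {fall_time(HEIGHT):.2f} s")
# Mass alone doesn't change the result; to see gravity's effect you would
# change g (Earth vs. Mercury) in a separate experiment, one variable at a time.
```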

Obviously, this is a very simple example.  If one were to ask something a bit more complicated, you could ask the following:

If Tylenol is administered to people with headaches, then they will experience pain relief.

The hypothesis above seems simple enough, right?  I could just give Tylenol to a bunch of people with headaches and see if we get an effect.  Then I would know whether my hypothesis was correct.  But what would happen if people prone to migraine headaches were participating in my study?  Or alcoholics (who don’t break down Tylenol all that well)?  The data I collected would be flawed, as the Tylenol probably wouldn’t do anything for people with migraines, and it might actually make alcoholics feel worse.  My hypothesis could appear to be wrong for reasons that have nothing to do with the drug.

Here is where we really need to consider “Controls.”  These are a separate set of measurements against which you compare your experimental results.  You may set this up in a variety of ways, but one possibility is to give a randomly chosen portion of your test subjects (migraine sufferers, alcoholics, and everyone else alike) a “placebo,” or something that looks like Tylenol but is actually inert.  Then, you can compare the two groups’ responses to see whether the Tylenol itself had any effect.
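Here is a toy sketch of what that treatment-versus-placebo comparison might look like.  The pain-relief scores are entirely made up, and a real study would involve proper randomization, blinding, and formal statistics; this just illustrates the logic of comparing against a control group.

```python
import random
import statistics

def mean_difference(treatment, placebo):
    """Average difference in pain-relief scores between the two groups."""
    return statistics.mean(treatment) - statistics.mean(placebo)

def permutation_p_value(treatment, placebo, trials=10_000, seed=0):
    """Rough permutation test: how often does randomly relabeling subjects
    produce a group difference at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = mean_difference(treatment, placebo)
    pooled = list(treatment) + list(placebo)
    n = len(treatment)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if mean_difference(pooled[:n], pooled[n:]) >= observed:
            hits += 1
    return hits / trials

# Made-up pain-relief scores (0 = no relief, 10 = complete relief)
tylenol = [6, 7, 5, 8, 6, 7, 5, 6]
placebo = [3, 4, 2, 5, 3, 4, 3, 2]

print(f"Observed difference: {mean_difference(tylenol, placebo):.2f}")
print(f"Permutation p-value: {permutation_p_value(tylenol, placebo):.4f}")
```

If the Tylenol group’s scores could plausibly have arisen by chance (a large p-value), the placebo comparison tells you the drug itself may not be doing anything.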

Above, I mention that after you formulate a hypothesis, you must test it: holding as many things constant as you can while varying only one specific aspect of the experiment, ideally an aspect you can control to some degree.  This brings us to the idea of “testability.”  In order for your experiment to be considered “Scientific,” it must be testable.  If it isn’t “testable,” then it doesn’t satisfy the “systematic” part of the definition.

Over time, enough experiments are done to warrant considering a certain concept to be a “Scientific Theory.”  That is to say, a Theory is an idea that is supported by a wide array of evidence and co-exists with other known Theories that are equally well verified by experimentation.  A “Scientific Law” is related but distinct: rather than an explanation, a Law is a concise description of something truly fundamental about how nature behaves, on which the rest of science and knowledge rests.  An example of a Theory is the Theory of Evolution by Natural Selection.  Examples of Laws are Newton’s Laws of Motion and the Laws of Thermodynamics.  Wikipedia also has a nice list of other Scientific Laws.

Most Laws tend to be Physics- or Chemistry-related, as these are the bedrock concepts upon which everything else stands.  You can’t fully explain Biology without fluid dynamics and quantum mechanics (you can ignore them for the most part, but they do become important in certain situations).  Theories, on the other hand, are much less clear cut.  They tend to represent constantly evolving fields of research, where new data is being added every day.  I will borrow the US National Academy of Sciences’ definition to explain more fully:

Some scientific explanations are so well established that no new evidence is likely to alter them. The explanation becomes a scientific theory. In everyday language a theory means a hunch or speculation. Not so in science. In science, the word theory refers to a comprehensive explanation of an important feature of nature supported by facts gathered over time. Theories also allow scientists to make predictions about as yet unobserved phenomena.

A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not “guesses” but reliable accounts of the real world. The theory of biological evolution is more than “just a theory.” It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.

So in some ways, a Theory is treated on almost the same plane as a Law, but they really aren’t the same thing.  A Theory can still be modified, while a Law is much, much harder to change.  Note the first sentence of the quote: “no new evidence is likely to alter them” means new evidence could still alter a Theory, but it’s highly unlikely.

My overall concern with perceptions of what Science is stems from the various debates over climate change, evolution, stem cell research, etc.  In many ways, much of the political hubbub concerns questions that Science isn’t equipped to answer.  By definition, Science can only give you a fact; it is up to the individual to decide how to apply their morals to that fact.

Science can tell you that Evolution is happening and that Natural Selection is the current Theory describing how it happens.  It’s a “Theory” because more data is being added every day, yet that data only strengthens the Theory, never weakens it.  Overall, Natural Selection is what happens.  End of story.  Scientifically, embryonic stem cells come from an embryo, which is a collection of cells that does not fit the accepted definition of “alive” (i.e. self-awareness, self-preservation, consciousness).  Whether or not you agree that an embryo is not alive is up to you to decide, but arbitrarily suggesting that “Science says that it’s a life” is incorrect and a misuse of the term.  Likewise, saying that there are “gaps in the geological record,” so that must mean that God exists and God created the Earth in 6 days, ignores how Science works: God is, by nature, “untestable,” and therefore beyond the purview of Scientific understanding.  These are but a few examples of how some would misunderstand Science and try to apply it to things it shouldn’t be applied to, or at least in ways it shouldn’t be applied.

The Study of Science is a systematic, logical progression that involves the formulation of a testable hypothesis, where testing involves experimentation, observation and collection of data to support or refute the hypothesis.  Hypotheses around a general subject can eventually add up to a Theory, and truly fundamental observations of the natural world become Law.  That’s all it is, folks.  No more.  No less.