Step 2: Prep Work

The beer kit itself comes with various components, some of which are consistent across kits and other components that are specific to the variety you are making.  In this case, we’re making a Honey Brown, so it has a few “extras” to it.  The most important components that come with each kit are:

  • Malt extract – the sugar the yeast ferment into alcohol
  • Hops – gives beer its “flavor” and the bitter taste you find in many Pale Ales
  • Yeast – dry or liquid
  • Priming sugar – regular ol’ sugar used in the bottling process

You’ll see that there are a few extra components in this kit, including a “mixed grain” product that we will steep in the water prior to boiling the malt, as well as honey for the, you know, “Honey” part of “Honey Nut Brown.” The assorted grains include chocolate malt, as well, providing another interesting, yet subtle, flavor for the beer.

The kit arrived at home while I was at work, but Brooke was kind enough to remove the liquid yeast from the packaging.  The yeast are (usually) the only component that needs to be refrigerated until you’re ready to prime them, but Brooke thankfully went ahead and got them started.  You’ll see that it comes in a little bag that looks flat, yet after you break a small ampule on the inside of the bag (by smacking the bag with your hand)…

…you get this within a few hours sitting out on our porch in the sun (i.e. it needs a relatively warm place for this part). As Brooke points out, you are effectively just “priming” the yeast as you would with any bread recipe. If you get dry yeast, you have to prime them like you do bread, but if you get the liquid yeast, you do it all in one cute little packet. Once it blows up to this level, though, you can use it.

The rest of the kit is pretty straightforward. Technically, this part is a separate kit: I ordered a “Brewing Kit” (pictured here) and then the actual “Beer kit” (first picture above), so they were actually different products. This bottom one is the portion I will re-use for other beer varieties.  I’ll probably hit up these different components as I use them in this series of posts, but I’ll point out a few items now:

  • Two buckets – one for fermenting and one for priming and bottling
  • Plastic tubing – mostly for use in the bottling process
  • Bottle capper and caps –  so you can save any ol’ non-twist top beer bottles and re-cap them with this system.  Woooo, recycling!
  • Cleaning solution – ’cause you need to ferment your beer in clean stuff.

In the next post, I’ll show some pictures of the actual brewing process, but bear in mind that these steps are very, very important.  The yeast need to be ready before you can start the brewing process, so a few hours need to be allotted to allow them to prime.  Secondly, all the equipment pictured above must be sterile, otherwise you can introduce some bad flavors to your beer.  I’m not going to show pictures of the sterilization process, as that would be very boring, but just keep in mind that any item that comes in contact with your beer needs to be sterilized.  You can’t over-sterilize your equipment.

The Digital Generation

I was listening to NPR’s On Point podcast from November 2nd, where Tom Ashbrook was interviewing Douglas Rushkoff on his “Rules for the Digital Age,” discussing Rushkoff’s new book “Program or Be Programmed: Ten Commands for a Digital Age.” The discussion bounced around quite a few topics, but largely focused on the thought that people today take their digital presence for granted and interact with digital media in such a way that they don’t control the outcome, but instead are controlled by their digital media.

For example, Rushkoff recounts a story from the PBS “Frontline” documentary “Digital Nation,” in which the producers ask a child: “What is Facebook for?”  The kid’s answer was “for making friends.”  It’s a relatively simple answer, and one that many adults would also provide, yet the truer answer is “to make money off of the relationships, likes and dislikes of its users.”

As another example, Rushkoff notes that students of my generation would go to the World Book or Encyclopedia Britannica to get a “primary source” for their book reports.  Now, for many people, simply using Google is “good enough” to find the information they want.  If you use Google’s Instant Search option, introduced a few months ago, your search results change by the second and are largely influenced by traffic on those sites, yet Google is perfectly capable of adjusting the results so that some pages show up first and others don’t.  To many users, these are simply “The Results” that they get; they typically don’t think about the vested interest that Google, as a company, has in making money off of its Search ventures.

Rushkoff’s solution, outlined in his “10 Rules,” is generally that people should be more computer literate.  He says that kids today who take a computer class in junior high or high school learn Microsoft Office.  To him, that’s not “computers,” but “software.”  You aren’t learning how a computer works.  You aren’t learning about the programming that went into those applications.  You aren’t learning about the types of programs available (i.e. closed-source vs open-source).  You simply accept what you are given as Gospel, without thinking critically.

As I listened to the discussion, especially with regards to Google, I had to think about this past election, which saw the rise of the Tea Party.  While many of them would have you believe that they were all educated, intelligent, active people, so many of them were taken advantage of by third-party groups, primarily corporations.  These are individuals who believed what they found in Google searches without thinking critically about what they were discussing.  Rachel Maddow did an interview in Alaska discussing the Senate race of Tea Party favorite Joe Miller (who lost…), and the supporters outside were angry about all the policies that Attorney General Eric Holder had supported, and his voting record prior to becoming A.G.  Of course, Maddow points out that Holder never held elected office, and thus had no voting record.  But these people believed it because that’s what they were told.  It’s what they read on the internet.  As if “The Internet” were to be equated with the Encyclopedia Britannica of old.

Rushkoff’s larger point, in my view, is that people today simply don’t have the critical thinking skills to handle what digital media has provided.  So much information is now provided with so many more sources that individuals can’t effectively wade through it and discern whether what they are reading is fact or fiction.

I’m not sure that a better understanding of computers alone would be enough to combat the problem, honestly.  Rushkoff suggests that some basic programming skills would be helpful for people to know as well, much as people thousands of years ago had to learn to write when “text” was invented.  He believes that the invention of text empowered people to write laws, to hold each other accountable, and to be more than they were.  He believes that giving everyone basic programming skills would do something similar: they would be more likely to know and understand why a computer does what it does, and how the programs on their systems interact with programs on the internet as a whole.  I barely have any programming training and I think I’ve got a relatively decent handle on how the internet works, but most of that was self-taught over nearly two decades.  I certainly don’t think it would hurt to have kids learn some basic programming, but they’re already missing the boat in so many other subjects that programming is surely at the bottom of the list.

To me, it’s the critical thinking part that needs to be improved.  With some basic critical thinking skills, hopefully, people would be more informed about everything they do in their daily lives: in raising their children, in voting for elected offices, in thinking about where their food comes from, in choosing which car to drive, in where they get their information, and so on.

But hey: if people want to learn more about computers, I’m all for it.

P.S. Happy birthday, Mom.  🙂

Step 1: Buy Some Beer

I woke up Saturday morning to find out I got my paycheck a few days early (!!!!), so I went ahead and got me a beer kit.  My boss, Dr. Doorn, had suggested a company that he’s gone through in the past called Northern Brewer, based out of Minnesota/Michigan.  He pointed out that they’ve got a pretty good variety of beers (he’s right…) and, perhaps most importantly, their close location means that shipping happens quite rapidly, so you don’t end up waiting for your package to arrive for a week or more as I would, perhaps, have to with William’s Brewing.  When comparing the two, it seems like their kits are very comparable in build and price, but Northern does seem to have a wider variety of beer options (94 options at the time of this writing), and you get to choose what kind of yeast you want (e.g. dry, liquid) and what kind of priming sugar.  Otherwise, everything else comes in each kit.

I got the cheaper set for now, as my Dad still has a few glass carboys from when he made wine a few years back.  If I decide to go that route, I can certainly do so, but for now, I’ll stick with my tried-and-true method.  For my first beer, I decided to go with a Honey Brown Ale (pictured above).  I went with that one for a few reasons, but one of them is that, compared with the other options, it should be ready relatively soon (close to 4 weeks).  Also, if you’ve never had one, a Honey Brown beer variety (assuming I do it right…) ends up being pretty smooth, not very bitter, and has a sweet flavor to it.  Therefore, hopefully, it’ll have a relatively wide appeal at Thanksgiving/Christmas gatherings this Fall/Winter.  For my next one, I’ll probably go with something more “hoppy,” which is the style of beer I tend to gravitate toward anymore.

As the title of the post implies, I’ll be writing these in a series of “Steps” as I go through the process, and as such, I’m completing a few things right now before the beer is even here.  One is measuring the temperature in my intended brewing location: the unfinished, cellar-like basement of our house.  I’m recording the temperatures 3-4 times a day at varying times in hopes of getting an idea as to how stable the temperature will be.  The “cellar-like” part should hold stable, but that is also where our furnace and washer/dryer are, so I’m not sure how the “swings” will affect the brewing process.  Typically, you want your fermentation to occur in a relatively stable environment: not too cold, not too hot, but also not swinging wildly between extremes.  When I did some brewing back in undergrad, we noticed that the yeast could be “shocked” into inactivity if the temperature dropped too far.  That meant the yeast, effectively, stopped doing what I wanted them to do: make alcohol and, consequently, beer.  So between last night and this morning, the temperature was hovering between 56 F and 60 F, and that’s fine by me.  Again, the yeast can handle a variety of temperature ranges, but they don’t like their temperatures being shifted around.  I could probably brew in the upper-40s to low-50s and be fine (with the right kind of yeast…), but the fermentation process would just be slower than it would be in the upper-60s to low-70s.

Secondly, I’m collecting bottles.  Most beer kits come with a capper and bottle caps, the latter of which you can always purchase more of for relative cheapness.  We’ll slowly collect “interesting” 12 oz bottles, but basically we’re sticking with those that don’t have markings on the glass itself, like Sam Adams bottles or New Belgium bottles do.  We’ve got 24 of those, which should hold over 2 gal of beer.  I’ve also got two 2 L bottles, and nine 1 L bottles, all of which have reusable tops on them, so they don’t require capping.  Those should hold over 3 gal of beer, bringing me comfortably past the 5 gal of total storage I will need.  I’ll probably try and keep a good mix like that, keeping most of the beer in the 1 L bottles, but making enough in the 12 oz bottles to either give away or take to gatherings in single-serving amounts.  We’ll probably collect more of those 12 oz bottles over time, but for now, we’ve got enough.
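For anyone who wants to double-check that bottle math, here’s a quick back-of-the-envelope calculation (the bottle counts are from above; the conversion factors are the standard ones):

```python
# Back-of-the-envelope bottle capacity check (1 gal = 128 fl oz, 1 L ~= 33.814 fl oz)
OZ_PER_GAL = 128
OZ_PER_L = 33.814

twelve_oz = 24 * 12                            # 24 standard 12 oz bottles
liter_bottles = (2 * 2 + 9 * 1) * OZ_PER_L     # two 2 L plus nine 1 L, in fl oz

print(round(twelve_oz / OZ_PER_GAL, 2))        # 2.25 gal in the 12 oz bottles
print(round(liter_bottles / OZ_PER_GAL, 2))    # 3.43 gal in the liter bottles
print(round((twelve_oz + liter_bottles) / OZ_PER_GAL, 2))  # 5.68 gal total
```

So the liter bottles actually carry more than the 12 oz ones do, and the whole collection clears the 5 gal mark with room to spare.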

So hopefully the kit ships today or tomorrow and I’ll have it this week, and assuming all goes according to plan (which rarely happens, I realize…), I should have something quasi-drinkable by Thanksgiving.  The carbonation process will not have had much time by Thanksgiving, as that’s a bit over 3 weeks away, but this variety of beer shouldn’t require all that much carbonation, anyway.  It all depends on how the yeast do in the basement environment and whether they keep fermenting at a good pace.  We’ll see!

Primer: Memory

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

The whole idea of “memory” has intrigued me for quite a while, arguably before I was even that interested in science in general.  Part of this is my attraction to all things computers.  I think I built my first computer (rather, helped Dad build one…) back in the late-90s, and at that time, I began to understand all of the components that make it function.  The idea of “input/output,” the function of a central processing unit (CPU), RAM and hard drives…all of these things proved relatively easy to grasp, and in light of these general functions, they made my understanding of the brain a bit easier in the process.

Let’s think of it this way.  You interact with computers in different ways, but one way is with a keyboard.  You type something into the keyboard and the data you input is converted by the CPU into something that can be understood by the system, in this case, binary code (i.e. a series of “1s” and “0s”).  All of your inputs from the keyboard are stored in RAM for faster, short-term access.  If you click the “Save” button on whatever you’re doing, however, the data stored in RAM gets sent to the slower-access hard drive.  As you open programs, information is pulled off the hard drive and into RAM so that your CPU can process it faster, and then you and your keyboard can get at and interact with it.  This is why, in general, having more RAM speeds up your computer: it can pull larger and larger programs into RAM so your CPU can get at them more easily, and thus, you can interact with them faster.
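That keyboard-to-RAM-to-disk flow can be sketched in a few lines of Python.  The names here are all made up for illustration: a plain dict stands in for RAM (fast, but wiped when power is lost), and a file stands in for the hard drive (slower, but persistent).

```python
# Toy sketch of the RAM / hard-drive split (illustrative names, not a real OS)
import json
import os
import tempfile

ram = {}  # "RAM": fast, volatile storage
disk_path = os.path.join(tempfile.gettempdir(), "toy_disk.json")  # the "hard drive"

def type_input(key, value):
    ram[key] = value                 # keystrokes land in RAM first

def save():                          # the "Save" button: RAM -> disk
    with open(disk_path, "w") as f:
        json.dump(ram, f)

def load():                          # opening a document: disk -> RAM
    with open(disk_path) as f:
        ram.update(json.load(f))

type_input("doc", "hello")
save()
ram.clear()                          # "power off": RAM is wiped...
load()
print(ram["doc"])                    # ...but the saved copy survives: prints hello
```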

In very basic terms, your brain works the same way.  We have inputs in the form of our 5 senses.  The information from those senses gets encoded by your brain’s Cerebral Cortex and is stored temporarily in the Hippocampus (i.e. RAM) before being encoded for long-term storage back in other regions of the Cortex (i.e. hard drive).  Most of the time, your brain “Saves” its data to the Cortex at night, which is why sleeping is so very important.  The “processing” portion of this paradigm can be confusing, but keep in mind that the brain is divided up into specific regions.  There’s a “visual cortex,” an “auditory cortex,” etc.  These regions (within the Cortex…) interpret what each sense gives you and then send that information through the Temporal and Parietal Lobes (also in the Cortex).  From there, the information is spread to the Hippocampus (i.e. RAM) for “integration” before being set as full, long-term memories out in the rest of the brain.

How is that information stored, you may ask?  Again, it’s much like a hard drive.  If you’ve used computers extensively, you know that hard drives are divided up into “sectors” (ever get a disc read error that says “bad sector?”).  When you have a new hard drive, you start with a clean slate.  As you install programs and add files, it gets filled up.  Once you delete something, that sector isn’t really “deleted,” but it is removed from your access: it isn’t really “deleted” until it’s overwritten by something else (which is why you can sometimes retrieve old files off a hard drive that you thought had been deleted).  Whenever you “defragment” your hard drive, you are basically trying to rearrange those programs to keep everything closer together, and thus, quicker to access.  The data on the hard drive is encoded in “1s” and “0s” (i.e. binary code).  Each 1 or 0 is considered to be a “bit,” while a set of eight 1s and 0s (e.g. 11010101, 10011010, etc.) is considered a “byte.”  This is where “kilobytes,” “megabytes” and “gigabytes” come from.
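That bit-and-byte bookkeeping is easy to play with directly:

```python
# Bits and bytes, as described above: 8 bits make 1 byte.
byte = "11010101"              # one byte: eight 1s and 0s
assert len(byte) == 8

value = int(byte, 2)           # the number this particular byte encodes
print(value)                   # prints 213

# Kilobytes, megabytes and gigabytes are just bigger piles of bytes.
kilobyte = 1024                # bytes (binary convention; 1000 in the decimal one)
megabyte = 1024 * kilobyte
gigabyte = 1024 * megabyte
print(gigabyte)                # prints 1073741824 (bytes in a gigabyte)
```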

The idea of 1s and 0s comes from logic, specifically the definitions of “True” (i.e. 1) and “False” (i.e. 0).  If you have a “1,” then you have a connection.  If you have a “0,” then you don’t.

Bringing this back to neuroscience, the same general rule appears to apply with regards to memories, or the concept of “learning” in general.  In order to form a memory, it needs to be encoded much like your hard drive is: in a series of combinations of connections (or missed connections) between neurons spanning the entire brain.  There are various molecular mechanisms that can account for these connections, or lack of connections, and those go back to receptor theory.  Remember that neurotransmission involves the release of a neurotransmitter (e.g. dopamine, adrenaline, etc.) from one neuron to bind with a receptor on another.  If a neuron stops receiving signals from another neuron, it will remove its receptors from the outside of the cell, thus limiting or negating the signal.  If, however, a neuron keeps getting increased signaling from an adjacent neuron, the subsequent neuron will increase the number of receptors on the outside of the cell, thus making it easier to signal.  Therefore, we have a mechanism for strengthening or weakening the connections between two neurons.

One could consider a “strengthened” neuronal connection to be a “1” and a “weakened” neuronal connection to be a “0.”  It is in this way, it is thought, that memories can be formed on a cell-to-cell basis.
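Here’s a cartoon of that strengthen/weaken mechanism in Python.  To be clear, this is my own toy illustration, not a real neuron model: repeated signaling adds receptors (pushing a connection toward a “1”), and prolonged silence removes them (pushing it toward a “0”).

```python
# Toy receptor bookkeeping (illustrative only, not a biophysical model)

def update_receptors(receptors, got_signal, step=1, max_receptors=10):
    """More receptors after activity, fewer after silence."""
    if got_signal:
        return min(receptors + step, max_receptors)
    return max(receptors - step, 0)

r = 5
for _ in range(6):                 # a burst of repeated signaling...
    r = update_receptors(r, True)
print(r)                           # prints 10: a "strengthened" connection, a "1"

for _ in range(12):                # ...followed by a long silence
    r = update_receptors(r, False)
print(r)                           # prints 0: a "weakened" connection, a "0"
```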

These neurons that memories are stored in are located throughout the brain, similarly to “sectors” on your hard drive.  As you stop using certain memories, the synapses of those neurons weaken to the point where they can be, effectively, “overwritten” in favor of a new memory.  This is also how the idea of “repressed memories” can come about, in that you can have a memory stored in a region of your brain that you have forgotten about, but can re-manifest later: if it isn’t overwritten, it’s still there.

From a molecular standpoint, scientists have a pretty good idea how memory “works,” but being able to decode those memories is a whole different beast.  Returning to our computer metaphor, imagine knowing nothing about computers and finding a hard drive.  What would you do with it?  Would you take it apart?  How would you know what it was?  Or what it contained?  And once you figured out that it, somehow, contained information, how would you read it?  If you eventually found out that it involved 1s and 0s, how would you know how those 1s and 0s were organized across the hard drive, and then finally, what they told you?

This is why it’s highly unlikely that we’ll ever be able to make or see memories like we do in the movies, at least, not for a very long time.  It’s one thing to understand the basis for how it works, but it’s a whole other thing to try and figure out how it’s organized within a system like the human brain.  Also, it’s been estimated that the human brain holds somewhere between a terabyte and a petabyte of information, which translates to 8,000,000,000,000 to 8,000,000,000,000,000 individual 1s and 0s, or individual neuronal connections.

Imagine looking at a sheet (or multiple sheets…) of paper with that many 1s and 0s on it and trying to decide which version of Windows it represents.  Or where your dissertation is…not the Word Document, but the PDF.  That’s what we’re talking about.

So yeah, I just find the concept of memory to be fascinating.  With modern computers, we’re effectively reverse-engineering the human brain and, in doing so, learning more and more about how technological and biological computation can work.  But next time you see some “memory reading” device on TV, bear in mind what’s actually required to make that technology work.

Primer: The Scientific Method

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

There are quite a few things that go flying by in the news that concern me (and I have posted about them here…at…length…), but one that really gets to me is public misunderstanding of Science.  As in, capital “S” Science.  Not really the fact that many people don’t know certain scientific facts, or don’t really understand how many things work, but more that they do not understand how science is done and what it really means.  I will seek to clear up some of that here.

First, however, what does Dictionary.com tell us?

Science – noun

1. a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws: the mathematical sciences.
2. systematic knowledge of the physical or material world gained through observation and experimentation.
3. any of the branches of natural or physical science.
4. systematized knowledge in general.
5. knowledge, as of facts or principles; knowledge gained by systematic study.

Now, this definition seems to center upon the natural/physical sciences; however, many, if not all, of the principles that “science” adheres to also apply, to varying degrees, to the social sciences (e.g. sociology, psychology, etc.) and other fields.  Here, though, I will focus on what I know best.

“Systematically” is the word sprinkled about in the definition above, and rightfully so.  “Systematically” refers to how science is conducted, generally through what we refer to as the scientific method.  The Wikipedia article, as usual, is a good start for further information on this particular subject, but basically, here’s how it works:

  1. Formulate a hypothesis
  2. Test the hypothesis through experimentation and observation
  3. Use collected data to confirm or refute the initial hypothesis
  4. Form a new hypothesis based on what was learned in steps 1-3

A “hypothesis,” put simply, is an educated guess toward a question you have.  Many times, especially when you’re first learning the scientific method, you may phrase it in the form of an “If/Then” statement.  For example:

If I drop this rock, then it will fall

The “If” portion of the above statement represents the “Independent Variable,” while the “Then” portion represents the “Dependent Variable.”  Effectively, the Dependent Variable is what you’re measuring and the Independent Variable is what you’re changing in the system.  In this particular case, if you drop the rock, does it fall or not?  You can measure whether or not it falls.  If you don’t drop the rock, does it still fall?  And so on.  It is called the Dependent Variable because it “depends” on what you do in the Independent Variable.

You can test multiple Independent Variables across a series of hypotheses, but you should only change one at a time, and the Dependent Variable you’re measuring cannot change.  What would happen if I dropped a rock on Earth and dropped another one on Mercury?  My results wouldn’t be comparable, because I changed too many things at once.  I could change the size of the rock, but if I’m measuring the rate at which the rock falls to the ground, I need to make sure the force of gravity is held constant.

Obviously, this is a very simple example.  If one were to ask something a bit more complicated, you could ask the following:

If Tylenol is administered to people with headaches, then they will experience pain relief.

The question above seems simple enough, right?  I could just give Tylenol to a bunch of people with headaches and see if we get an effect.  Then I would know if my hypothesis was correct or if it wasn’t.  But what would happen if people prone to migraine headaches were participating in my study?  Or alcoholics (who don’t break down Tylenol all that well)?  The data I would receive would be flawed, as the Tylenol probably wouldn’t do anything for people with migraines and it may actually make alcoholics feel worse.  My hypothesis would appear to be refuted, not because Tylenol doesn’t work, but because I didn’t account for who was in my study.

Here is where we really need to consider “Controls.”  These are a separate series of experiments that you use to compare your experimental results to.  You may choose to set this up in your experiment in a variety of ways, but one possibility is to give those with migraines or the alcoholics (and all other test subjects) a “placebo,” or something that looks like Tylenol, but is actually inert.  Then, you can compare your responses to see if Tylenol had any effect or not.
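A placebo-controlled comparison can be sketched with made-up numbers.  Every value below is hypothetical; each one represents a participant’s drop in self-reported pain:

```python
# Toy placebo-controlled comparison (all numbers invented for illustration)
import statistics

tylenol = [4, 5, 3, 6, 5, 4, 5, 3]     # hypothetical treatment group
placebo = [1, 2, 1, 0, 2, 1, 1, 2]     # hypothetical control group

# The effect we care about is the difference beyond the placebo response.
diff = statistics.mean(tylenol) - statistics.mean(placebo)
print(round(diff, 2))                   # prints 3.12

# A real study would also randomize group assignment and run a
# significance test (e.g. a t-test) before claiming an effect.
```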

Above, I mention that after you formulate a hypothesis, you must test it.  You must test it by holding as many things constant as you can while only varying a specific aspect of the experiment, especially an aspect that you can control to some degree.  This brings us to the idea of “testability.”  In order for your experiment to be considered “Scientific,” it must be testable.  If it isn’t “testable,” then it doesn’t satisfy the “systematic” part of the definition.

Over time, enough experiments are done to warrant considering a certain concept to be a “Scientific Theory.”  That is to say, a Theory is an idea that is supported by an array of evidence and co-exists with other known Theories that are equally verified by experimentation.  A “Scientific Law,” by contrast, is a concise statement of something truly fundamental about the natural world, on which the rest of science and knowledge rests.  An example of a Theory is “The Theory of Natural Selection.”  Examples of Laws are “Newton’s Laws of Motion” and the “Laws of Thermodynamics.”  Wikipedia also has a nice list of other Scientific Laws.

Most Laws tend to be Physics/Chemistry-related, as these are the bedrock concepts upon which everything else stands.  You can’t really study Biology without fluid dynamics and quantum mechanics (well, you can ignore them for the most part, but they do get involved in certain situations).  Theories, on the other hand, are much less clear cut.  They tend to represent a constantly evolving field of research, where new data is being applied every day.  I will steal the US National Academy of Sciences definition to explain more fully:

Some scientific explanations are so well established that no new evidence is likely to alter them. The explanation becomes a scientific theory. In everyday language a theory means a hunch or speculation. Not so in science. In science, the word theory refers to a comprehensive explanation of an important feature of nature supported by facts gathered over time. Theories also allow scientists to make predictions about as yet unobserved phenomena.

A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not “guesses” but reliable accounts of the real world. The theory of biological evolution is more than “just a theory.” It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.

So in some ways, a Theory is treated on almost the same plane as a Law, but they really aren’t the same thing. A Theory can still be modified, while a Law is much, much harder to change.  In that first sentence, it says “no new evidence is likely to alter,” meaning you could still alter it, but it’s highly unlikely.

My overall concern with perceptions of what Science is stems from the various debates over climate change, evolution, stem cell research, etc.  In many ways, much of the political hubbub is regarding something that Science isn’t equipped to answer.  By definition, it can only give you a fact – it is up to the individual to decide how to apply their morals to that fact.  Science can tell you that Evolution is happening and that Natural Selection is the current Theory to describe how it happens.  It’s a “Theory” because more data is getting added every day, but the Theory is only strengthened, not weakened.  Overall, Natural Selection is what happens.  End of story.  Scientifically, embryonic stem cells come from an embryo, which is a collection of cells that does not fit the accepted definition of “alive” (i.e. self-awareness, self-preservation, consciousness).  Whether or not you agree that an embryo is not alive is up to you to decide, but arbitrarily suggesting that “Science says that it’s a life” is incorrect and a misuse of the term.  Saying that there are “gaps in the fossil record,” so that must mean that God exists and God created the Earth in 6 days, ignores how Science works – God is, by nature, “untestable,” and therefore beyond the purview of Scientific understanding.  These are but a few examples of how some misunderstand Science and try to apply it to things it shouldn’t be applied to, or at least in ways it shouldn’t be applied.

The Study of Science is a systematic, logical progression that involves the formulation of a testable hypothesis, where testing involves experimentation, observation and collection of data to support or refute the hypothesis.  Hypotheses around a general subject can eventually add up to a Theory, and truly fundamental observations of the natural world become Law.  That’s all it is, folks.  No more.  No less.

Primer: Neurotransmission

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

As I’ve mentioned…oh…countless times, I became interested in my chosen field primarily because of a class titled “Psychopharmacology,” offered by the Psychology Department at Truman.  As the name suggests, the class primarily focused on how drugs modify an individual’s mental state, whether it’s an illicit drug that changes the way you act (e.g. methamphetamine), or one that’s used to help you cope as you carry out your day (e.g. diazepam [Valium]).

Back in June, I posted about Pharmacology, the study of how a drug acts within an organism.  One thing I discussed, but did not elaborate on, was that many drugs function at receptors, and the modification of these receptors is what gives you the desired effect of said drug.  However, in order to understand how these receptors actually do something to your body, you need to understand the basics of how neurotransmission works.

Basically, neurotransmission is a signal sent between two specialized cells called neurons.  These cells make up a large portion of the brain (though there are other cell types, including astroglia and microglia) and provide all the processing power you need to carry on with whatever task you wish.  Therefore, if you want to modify something about that task, these are important cells to consider and/or target with a drug.  Neurons take advantage of channels in their membranes that allow selective transfer of ions like sodium, potassium, chloride and calcium.  When these ions cross the membrane from outside the neuron to the inside (or vice versa), the electrical charge across the membrane changes.  These channels open and close selectively to allow certain things through, and keep other things out.  For example, sodium channels in neurons typically allow sodium into the cell, while potassium channels tend to allow potassium to leave the cell.

Many of the receptors that drugs are targeted toward are channels, or the drug-targeted receptors somehow affect the ability of channels to open or close.  Therefore, if you can target your drug toward a specific channel, you can keep it open longer, or close it sooner, allowing you to affect whether a neuron is able to continue propagating its signal.

So, the electrical signal caused by transfer of ions across a neuron’s cell membrane (or “action potential”) travels down the neuron, from end to end.  On one end is the “cell body” (or “soma”) and on the other end is the “axon terminal.”  The electrical signal always goes from the cell body to the axon terminal.  The cell body is covered in “dendrites,” outcroppings of the cell that receive a signal from another neuron’s axon terminal.  Therefore, typically, (1) a signal will start at the dendrites; (2) travel through the cell body and down the axon; (3) trigger a set of events in the axon terminal resulting in (4) the release of a neurotransmitter that (5) crosses the synapse until it reaches another dendrite and (1) starts the process over again.

What happens between the axon and the dendrite can best be described by this image, stolen from Wikipedia:

Neurotransmitters are packaged in “vesicles” that are directed to release their contents into the synaptic cleft where they travel across the cleft to the opposing dendrite, setting off a similar cascade in the next neuron.  There are also “reuptake transporters” in the cleft to help remove excess neurotransmitter, so you don’t have that opposing neuron continuing to fire too long.

Examples of neurotransmitters include dopamine, adrenaline (epinephrine), acetylcholine, GABA and serotonin.  (Nicotine, by the way, isn’t a neurotransmitter itself — it’s a drug that acts on receptors for acetylcholine.)

Now, you probably recognize a few of those neurotransmitters, right?  For example, you probably know that serotonin happens to be very important to your mood.  If your serotonin signaling is too low, you tend to get depressed.  So what can you do to help combat this deficiency?  Try taking an SSRI (selective serotonin reuptake inhibitor).  That drug targets the “reuptake transporter” in the cleft, allowing the serotonin you’re already making to stay in the cleft longer, helping to activate those neurons to keep your mood a bit happier.
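If you like toy models, the effect of blocking reuptake can be sketched as simple first-order clearance of neurotransmitter from the cleft.  The rates and times below are made up purely for illustration:

```python
import math

def cleft_concentration(c0, reuptake_rate, t):
    """Neurotransmitter remaining in the cleft after time t, assuming
    simple first-order clearance by reuptake transporters (a toy model)."""
    return c0 * math.exp(-reuptake_rate * t)

# Hypothetical numbers: the same serotonin release, but the SSRI blocks
# a fraction of the reuptake transporters, so clearance is slower.
without_ssri = cleft_concentration(c0=100.0, reuptake_rate=1.0, t=2.0)
with_ssri    = cleft_concentration(c0=100.0, reuptake_rate=0.3, t=2.0)

print(f"no SSRI:   {without_ssri:.1f}% of serotonin remains")  # ~13.5%
print(f"with SSRI: {with_ssri:.1f}% of serotonin remains")     # ~54.9%
```

Same amount of serotonin released, but with reuptake slowed, far more of it is still sitting in the cleft to keep signaling — which is the whole point of the drug.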

You’d use an SSRI to help serotonin reach its target neuronal receptors, thereby allowing for increased signal propagation through neurons.  But what if you want to limit propagation of signals, for example, in the case of an epileptic seizure when neurons are firing uncontrollably?  You can use a depressant like carbamazepine.  This drug targets channels and modifies them in such a way that the electrical signal (“action potential”) being sent down the axon is limited, or “depressed.”  It prevents the signal from continuing and, therefore, less (or no) neurotransmitter is released into the synapse.  That same drug can be used to help treat the manic symptoms of bipolar disorder, as well.

So, all of these principles are taken into account (as well as countless others…) when looking for drug targets, and when doctors are prescribing medications.  This is why you can have so many complications when you are prescribed a cocktail of medications, especially as you get older.  If you are taking, say, 10 different medications per day, prescribed by different doctors, it is easy for at least one of those drugs to counteract the effects of another.  There are many factors to consider when prescribing or taking these kinds of medications, as they have effects all over the body.  One simple example is methamphetamine.  This drug targets that reuptake transporter, much like an SSRI does, but it (1) does so for a class of neurotransmitters called catecholamines, and (2) reverses the transporter, rather than blocking it.  The class of catecholamines includes dopamine and adrenaline.  So, if you take methamphetamine, you will be increasing the amount of dopamine and adrenaline throughout your body, not just your brain.  Your heart races because of the adrenaline, and the psychological effects occur because of the dopamine (including its addictive qualities).

In summary, neurotransmission is pretty complicated, but its basics are understandable.  The take-home concepts are:

  • Neurons are responsible for “processing” in your brain, and they use electrical and chemical signals to communicate with each other
  • Many drugs that affect your psychology target the ability of neurotransmitters to “continue the signal” from neuron to neuron
  • Some drugs affect more than one aspect of neurotransmission, and in more than one location

“Print” Lives?

I’ve had magazine subscriptions of various types for years now, beginning with Boys’ Life (the Boy Scout magazine…) and various computer game mags, and then eventually Popular Science and Consumer Reports.  However, in recent years, there have been a number of news stories discussing “The Death of Print Media,” primarily magazines and newspapers.  This is mostly due to the Internet and its ability to get you the same information much, much faster than a weekly or monthly periodical can, and cheaper as well.

Recently, however, certain magazines have begun to toy with digital versions of their material.  These are magazines that have either dropped in subscribers to a substantial degree, or have already folded for a variety of reasons.  For example, while TIME Magazine is apparently weathering the storm, Newsweek just got hammered by a drop in subscribers to the point where they were looking for a buyer.  Gourmet Magazine shipped its final issue at the end of 2009.  On the gaming side, Electronic Gaming Monthly was shuttered at the beginning of 2009.

Some magazines have gotten around this problem by increasing the quality of their material.  Edge Magazine, a gaming periodical in Europe, has proven successful by printing on thicker, glossy paper, raising the perceived value of its product over its competitors’.  The magazine just looks good sitting on your table, with its larger paper and glossy images.  It’s the kind of thing you want to keep on your coffee table, as opposed to other magazines that are constantly including more and more ads and thinner, newspaper-like print.  Edge also limits the number of individual magazines it produces, only making enough to send to subscribers (all over the world…) and keep a limited number on news stands.  This keeps costs down, rather than printing more magazines than the public will buy.

Alternatively, some of the aforementioned publications are going digital…and in a big way.  The advent of the iPad has allowed Newsweek and Sports Illustrated (amongst others) to get weekly content to readers on-the-go very cheaply, effectively replicating web-based content in a magazine-oriented format.  You can turn the pages as you would with a book, but now with a touch-based gesture on your iPad screen.  The images are very colorful, the print easy to read, and perhaps most important of all, they can now include hyperlinks and video content that you can’t with a regular magazine.  Recently, it was also announced that Gourmet Magazine was relaunching as Gourmet Live, also releasing on iPad (announcement video below).

Similarly, Electronic Gaming Monthly was bought out by the guy that started the magazine in the first place back in the 90s and relaunched in both print and digital formats.  For a demo, click this link and it will take you to a freely available copy of the magazine (pictured above) so you can see what it looks and feels like (and you should “Experience in Full Screen”).  While you may not be interested in video games in the least, at least you’ll get an idea of what is possible through digital distribution of magazines.  EGM also has an iPad version, but this particular example is representative of what you can experience in any web browser.

So, is “print dead?”  Probably not, but it’s definitely evolving.  Everything I’ve heard suggests that print journalism majors are finding it difficult to get jobs once they graduate from college, as many newspapers and magazines are scaling back, if not shutting their doors.  The primary hurdle appears to be advertising, as very, very few companies have been able to make it with their large-scale operations solely on the advertising revenues of web-based content.  The New York Times tried unsuccessfully to require subscriptions on portions of their website years ago (and they’re trying again in 2011), but our culture tends to shun pay-for content on the internet, at least with regards to news.  There are just so many blogs available, or other free sites, that get you the same information for no money at all.

Personally, I’m on board with a model like the one Edge and EGM are using, producing magazines in limited quantity for the people who want them, but otherwise providing digital versions for those that don’t care either way.  Honestly, I still read everything on blogs and only go to the “primary source” sites when linked there.  I like the way EGM has set up their content, but I think I’d rather have an iPad or some other similar device for that purpose, rather than use my heavier and more unwieldy laptop (imagine sitting in bed and reading…would you rather hold your laptop or your iPad?).

I think a lot of people value the content they get from magazines and newspapers, as the journalists that write them get access to news and information they otherwise can’t.  Bloggers generally don’t have correspondents in Afghanistan, so they rely on organizations like NPR and the Associated Press to gather the news, and bloggers just put their own spin on it and spread it as well.  We still need primary news sources to survive this transition from “old media” to “new media!”

Primer: Mass Spectrometry

My postdoctoral fellowship here at the University of Iowa still involves research on the mechanisms by which Parkinson’s disease progresses, much like my research at Saint Louis University, but I’m employing different techniques.  In an effort to explain those techniques, I’m going to try outlining one of them here: mass spectrometry, a technique that’s “tossed around” on shows like “CSI:” on an almost weekly basis.

Mass Spectrometry is a technology developed over 100 years ago and has been employed by researchers for much of that time.  The high cost of procuring one of these instruments (easily in the $10,000s, if not approaching $100,000+) makes them somewhat difficult to find in the undergraduate setting, and sometimes difficult to find in graduate schools.  Larger institutions, such as the University of Iowa, will have a few of them, but more than likely, you’ll have to share the instrument with quite a few others, not-so-patiently waiting their turn.

The instrument I’m using is called an LCMS-IT-TOF, pictured above.  The acronym stands for “liquid chromatography mass spectrometer – ion trap – time of flight.”  Each section of the acronym represents a distinct component of the mass spectrometer: there are different components that can be inserted to achieve similar analytical results in a different fashion.  Some components are better for some types of analyses, while other components are better for others.

But, in keeping this relatively simple, I won’t go into each part.  Feel free to check out the Wikipedia article on the subject if you really want to know more about it, but basically, a mass spectrometer is divided into three primary components:

  • A source
  • A mass analyzer
  • A detector

The “source” effectively destroys whatever you’re wanting to look at.  There are a variety of different sources one can have in their configuration (e.g. MALDI, ESI, APCI, etc.).  In our case, let’s say you have a protein you want to investigate.  The mass analyzer can look at it, but the nature of the type of data it provides makes it much easier to break the protein up into smaller bits first.  Therefore, the source breaks up your relatively large molecule of interest (such as the protein in our example) into smaller, more manageable pieces.  As with many other things, taking things in “baby steps” is much easier to deal with.

The “mass analyzer” is necessary to help with sorting of all those small, manageable pieces.  Think of this process like a box of cereal (I know, right?). Specifically, Frosted Mini-Wheats.  When you open the box, you’ll notice that there are mostly fully-formed Mini-Wheats at the top of the box.  As you continue on toward the bottom, you’ll start seeing some smaller pieces, some that may have split in half, for example.  And at the bottom of the box, you’ll see all the individual wheat fibers and sugar frosting.  The same premise holds for a mass analyzer.  All those pieces of protein broken up by the source are in different sizes, and the mass analyzer helps sort them out in such a way that the small pieces, medium pieces, and the large pieces are all separated.  As with the source, there are many different types of mass analyzers (e.g. TOF, IT, Quadrupole, etc.) used to carry out this work, depending on what you’re looking at.

The “detector” is the piece that really gives us the information we want.  After those bits of sample are sorted, they each hit the detector one at a time and the detector tells us what the mass is, typically by actually reading the electrical charge of the sample.  Typically, the source (sometimes referred to as an “ionization source”) introduces a charge to each piece of the sample, allowing for the detector to…um…detect them.  🙂

So, how is my work fitting into this?  Our lab is interested in how a particular molecule, 3,4-dihydroxyphenylacetaldehyde (DOPAL), may be involved in Parkinson’s disease.  DOPAL is a metabolite of dopamine, the neurotransmitter necessary in order for you to make voluntary movements.  When you run out of dopamine (or the cells that produce it, in the region of the brain where you need it), you get Parkinson’s disease.  Dopamine is present in those cells, which therefore means DOPAL is present, too.  DOPAL is an aldehyde, which means, on a chemical level, it can bind with other molecules relatively easily.  What we want to know is whether DOPAL may bind to proteins within those cells.  This may matter because cells tend to function in certain ways, and if their individual parts (e.g. DNA, organelles, proteins, etc.) get modified somehow, they won’t work properly and, subsequently, the cell will kill itself to prevent further damage to surrounding cells and tissues.

We want to see whether DOPAL binds to any proteins.  If we can find proteins that DOPAL binds to, and if we know what those proteins do inside a cell, then we may be able to a). protect them against DOPAL’s binding, or b). develop drug targets toward those proteins to help prevent them from causing death of the cell.

How does mass spectrometry fit into this equation?  Back to our earlier example of a protein being introduced into a mass spectrometer.  The instrument will tell us how much a protein weighs on a molecular level.  We also know how much a single molecule of DOPAL weighs.  We can, thus, use a mass spectrometer to see whether the mass of a protein increases when DOPAL is present.  If that occurs, we can show that DOPAL has bound to the protein.  We can also get information as to where on the protein DOPAL bound, or how much DOPAL bound to the protein, and so on.

In the image above (upper left), you can see some vertical lines we refer to as “peaks.”  Each peak represents a single mass of a given protein or molecule.  You can then take that peak and “fragment” it into smaller peaks.  You can do this multiple times (e.g. MS, MS2, MS3 and so on…).  Fragmentation patterns give you an idea as to what makes up a complex molecule.  For example, if you went from MS to MS2 and had a loss of 18, you could say that you lost a water molecule during fragmentation (O=16, H=1…H2O=18).  In the case of DOPAL, we would see an increase in mass (and a shift of the peak) of 151, depending on how DOPAL bound to our protein of interest.
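The arithmetic behind that water-loss example is simple enough to sketch in code.  Note that DOPAL’s formula below (C8H8O3) is my addition, not something from the post, and the mass shift you actually observe for an adduct can differ a bit from the free molecule’s mass depending on the binding chemistry (which is why the post cites 151):

```python
# Nominal (integer) atomic masses, as used in the water-loss example above
ATOMIC_MASS = {"H": 1, "C": 12, "N": 14, "O": 16}

def nominal_mass(formula):
    """Nominal mass of a molecule given as a dict like {'H': 2, 'O': 1}."""
    return sum(ATOMIC_MASS[atom] * count for atom, count in formula.items())

water = nominal_mass({"H": 2, "O": 1})
print(f"H2O = {water}")    # 18: an MS -> MS2 loss of 18 suggests water came off

dopal = nominal_mass({"C": 8, "H": 8, "O": 3})
print(f"DOPAL = {dopal}")  # 152: the free molecule's nominal mass
```

So when the spectrum shows a peak shifted up by roughly DOPAL’s mass, that’s the evidence that DOPAL is riding along on the protein.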

So, basically, that’s what I’m doing in the lab.  There’s quite a bit more to the story than this, but I think I’ve simplified the concepts to a mostly understandable level.

Probably not, though.  🙂

Primer: Pharmacology

Whenever my parents had to try and explain what I was getting my Ph.D. in to their friends or my extended family, the common response would be: “he’s going to be a Pharmacist?” Whenever I’d be asked the question, I’d typically respond with a “sigh” and then continue to say: “The difference is that a Pharmacist sells drugs, and a Pharmacologist makes drugs.” Of course, that’s a simplified definition, but was typically good enough for my purposes.

In actuality, that isn’t completely accurate.  The Dictionary.com definition reads as follows:

pharmacology   -noun

the science dealing with the preparation, uses, and especially the effects of drugs.

The Wikipedia article on Pharmacology is also pretty useful, and goes into much greater depth than I prefer to here.  To summarize more broadly, Pharmacology is the study of how drugs work in an organism.  This definition encompasses how a drug gets produced, how it gets into your body, where it goes once it’s in your body, what effect it has once it reaches its destination, and how it ultimately gets out of your body.

According to Goodman & Gilman’s The Pharmacological Basis of Therapeutics (11th ed.), the study of Pharmacology can be subdivided into two categories, each dependent upon the other.

When a drug enters the body, the body begins immediately to work on the drug: absorption, distribution, metabolism (biotransformation), and elimination. These are the processes of pharmacokinetics. The drug also acts on the body, an interaction to which the concept of a drug receptor is key, since the receptor is responsible for the selectivity of drug action and for the quantitative relationship between drug and effect. The mechanisms of drug action are the processes of pharmacodynamics. The time course of therapeutic drug action in the body can be understood in terms of pharmacokinetics and pharmacodynamics.

So, the study of pharmacokinetics looks at how a drug moves through your body (“pharma” for drug; “kinetic” for movement).  It is important to understand these principles when developing or prescribing a drug.  For example, in the case of sleeping medication, you want the drug to act rapidly so that you fall asleep; however, you also want the drug’s effects to last long enough to keep you asleep…but wear off in time for you to get up the next day.  The study of a drug’s pharmacokinetic properties helps develop treatment regimens that those other doctors (read: M.D.s) can use to prescribe medications accordingly, for whatever the situation calls for.
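That “wears off in time” behavior is often approximated with first-order elimination and a half-life.  Here’s a toy sketch; the 4-hour half-life is a made-up number for a hypothetical sleeping pill, not any real drug:

```python
import math

def plasma_concentration(peak_pct, half_life_h, t_h):
    """Percent of peak drug concentration left after t hours, assuming
    simple first-order (exponential) elimination -- a toy PK model."""
    k = math.log(2) / half_life_h  # elimination rate constant
    return peak_pct * math.exp(-k * t_h)

# Hypothetical sleeping pill with a 4-hour half-life:
for t in (0, 4, 8, 12):
    c = plasma_concentration(peak_pct=100.0, half_life_h=4.0, t_h=t)
    print(f"t = {t:2d} h: {c:5.1f}% of peak remaining")
```

Every half-life, half of what’s left gets cleared (100% → 50% → 25% → 12.5%), which is the kind of curve pharmacologists use to decide dose size and dosing interval.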

Pharmacodynamics, on the other hand, looks at how a drug works once it reaches its destination in the body.  Some drugs work primarily in the brain, some in the heart, some in the lungs, and so on.  Many drugs exert their function by binding to a receptor on the outside of a cell (example: diazepam [Valium]), perhaps a receptor that is responsible for “exciting” the cell or “depressing” the cell (i.e. increasing or decreasing a cell’s function).  Perhaps Drug A binds more effectively to that receptor, giving you a more efficient response, while Drug B isn’t quite as efficient in eliciting a response.  Even so, while Drug A may be more efficient, perhaps the effect you and your doctor want is a more delayed, longer-lasting one, and Drug B could fit that bill (typically, you want anti-anxiety medications to last throughout the day, for example…not just for a few hours).
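That “Drug A binds more effectively than Drug B” idea can be sketched with the simple Emax model from pharmacodynamics.  The EC50 values (the concentration producing half the maximal effect) are hypothetical here:

```python
def emax_response(conc, ec50, emax=100.0):
    """Percent of maximal effect at a given drug concentration,
    using the simple Emax model from pharmacodynamics."""
    return emax * conc / (ec50 + conc)

# Hypothetical 'Drug A' binds its receptor more effectively (lower EC50)
# than 'Drug B', so the same concentration gives a bigger response.
conc = 10.0
drug_a = emax_response(conc, ec50=2.0)   # ~83% of the maximal effect
drug_b = emax_response(conc, ec50=20.0)  # ~33% of the maximal effect
print(f"Drug A: {drug_a:.0f}%   Drug B: {drug_b:.0f}%")
```

At the same concentration, the tighter-binding drug gets you much closer to the maximal effect — though, as noted above, “strongest” isn’t always what you and your doctor actually want.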

Knowing principles of pharmacokinetics can help you maximize how much drug gets to the site of action.  Knowing principles of pharmacodynamics can help maximize how much of an effect the drug has once it’s there.  Both of these concepts are essential to effective drug design and usage.

As a brief (yet related) aside, I first became interested in the subject when taking a class on Psychopharmacology in the Psychology department at Truman State.  It was very interesting to learn how different drugs affect your brain to produce different effects.  For example, diazepam (Valium) is intended to function as an anxiolytic and sedative.  The basis of its function, however, is that it works on specific receptors that effectively “depress” neurons, limiting their firing ability.  It turns out that function is also quite useful for preventing seizures, a disorder where neurons fire more often than they should.  So, some drugs that are intended for one purpose can be useful in another, but you need some understanding of how that drug works before you can begin to apply it to another situation.

So, in short, pharmacology refers to the study of how drugs work and, therefore, a pharmacologist works on such things.  I should point out that pharmacists do play an important role in the development of drugs, as well.  Merck and Pfizer employ both Pharmacologists (Ph.D.) and Pharmacists (Pharm.D.), amongst a wide variety of others.

But, they’re quite different in their training.

The Next Big Thing

The Electronic Entertainment Expo (E3) was held last week in Los Angeles. It is always interesting for me to watch the coverage in the gaming media during that week, looking at live blogs about the different press conferences (Microsoft, Nintendo and Sony, primarily), and gathering everyone’s opinions about the proverbial “future of gaming.” Essentially, E3 is the time where most consumers hear about what games or platforms will be available for the holidays, or shortly thereafter. All the major media outlets tend to cover it in order to tell their viewers what they’ll be buying for themselves or their kids this Christmas.

You may have read in the news about Microsoft’s Kinect, or Sony’s Move. Both of these systems are attempts at cashing in on some of Nintendo’s motion control success that the Wii had. Microsoft focused a bit too much on Kinect, while Sony did a little better job of showing some games that the wider audience would want to play. No pricing has been announced for Kinect, but $150 seems to be the prevailing wisdom, plus the cost of the console. The Move will cost $120 or so to get started, but an additional $60 per person in order to get the “full motion control effect.”

While Microsoft and Sony were duking it out over motion control, Nintendo went a different direction: the Nintendo 3DS. I kinda wanted to post something about it last week, but I wanted to hear more analysis from the weekly podcasts I frequent, as they were able to get some “hands on” experience with it. To quote Jeremy Parish over at 1up:

Then I actually got to use the 3DS, and… wow. It works. It doesn’t strain my eyes at all, yet I can absolutely see the depth. I’m not exaggerating that the realization that my poor eyesight won’t shut me out of the next generation of portable gaming was the single happiest moment I’ve ever had at a gaming industry event.

To get a sense of what the 3DS can do, check out this YouTube video. This video does NOT take place on a 3DS, but demonstrates the kind of visuals and gameplay it should be able to handle when it comes out in 2011.

Nintendo will have a tough time demonstrating the 3D technology in TV commercials as very few TVs actually display 3D images. The tech is rumored to work by having two LCD screens overlapping, where the top one is shifted slightly such that one eye sees the top one and the other eye sees the bottom, allowing for stereoscopic 3D without the need for glasses.

That last bit is why this technology will be the new hotness next year, and why this thing will sell like hotcakes. You don’t need 3D glasses. And it’ll probably sell for close to $200, making it affordable 3D, as opposed to needing a multi-thousand-dollar TV and 3D shutter glasses that sell for a few hundred dollars each (like Sony was demonstrating). This product marks the first time real, working, glasses-free 3D images will be widely available to consumers (no, the Virtual Boy doesn’t count).

The Nintendo 3DS even has two cameras on the outside, allowing you to take 3D pictures.

Also, Nintendo was demonstrating some 3D movie trailers on the 3DS as well, suggesting that the device will have the ability to play movies. So, your kids that loved “How to Train Your Dragon” or “Shrek 3D”…they’ll be able to watch them in 3D, and you won’t have to spend that much money to make it happen.

So, for the average consumer, the 3DS is a pretty big deal. The Nintendo DS has sold around 130 million units, making it one of the best-selling consoles of all time. Parents buy them for their kids for Christmas without even thinking about it. It’s a way to entertain the kids at home and in the car without requiring you to give up your TV. If it sells for less than $200, it will still be a no-brainer. But the fact that it has true 3D without the need for glasses will get the average consumer that doesn’t have kids to sit up and take notice.

I’ll be first in line when it releases in Spring 2011 (projected release time frame).