Primer: Drug Discovery

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

There are a few ways to approach the general idea of drug discovery, but I’m going to try to tackle it from the historical angle first, and maybe revisit it in a future Primer.  I am part of the Division of Medicinal and Natural Products Chemistry at the University of Iowa, and its two components, Medicinal Chemistry and Natural Products, are both integral to the idea of developing new drugs.  Medicinal Chemistry is just as it sounds: the study of designing and synthesizing new drugs using principles of chemistry, pharmacology and biology.  Natural Products, however, is a bit more interesting in that, just as it sounds, it studies chemical compounds “developed” in other organisms that may be useful as drugs.

The oldest records tend to cite the ancient Chinese, the Hindus and the Mayans as cultures that employed various products as medicinal agents.  Emperor Shen Nung, in 2735 BC, compiled what could be considered the first pharmacopeia, including the antimalarial drug ch’ang shan, and also ma huang, from which ephedrine was isolated.  Ipecacuanha root was used in Brazil for treatment of dysentery and diarrhea, as it contained emetine.  South American Indians chewed coca leaves (containing cocaine) and used mushrooms (containing tryptamine derivatives) as hallucinogens.  Many different examples of drug use in ancient and more modern cultures can be pointed to as early forerunners of today’s drug industry.

However, it was the 19th and 20th centuries that really kick-started the trend, as this is when modern chemical and biological techniques started to take hold.  It was in the 19th century that pharmacognosy, the science that deals with medicinal products of plant, animal, or mineral origin, was supplanted by physiological chemistry.  Because of this shift, compounds like morphine, emetine, quinine, caffeine and colchicine were all isolated from the plants that produced them, allowing much purer, and more effective, preparations.  Advances in organic chemistry at the time made these isolations feasible; such discoveries wouldn’t have been possible previously.

In today’s world, there are a few ways you can go about discovering a new drug:

  1. Random screening of plant compounds
  2. Selection of groups of organisms by Family or Genus (i.e. if you know one plant that makes a compound, look for more compounds in a related plant)
  3. Chemotaxonomic approach investigating secondary metabolites (i.e. selecting organisms based on the classes of secondary metabolites they are known or predicted to produce)
  4. Collection of species selected by databases
  5. Selection by an ethnomedical approach

I think the latter two are the most interesting, especially from a historical perspective.  With the latter, we’re talking about going into cultures (a la the movie “Medicine Man”) and learning about the plants they use to cure certain ailments, then getting samples of those plants and figuring out what makes them effective.  It has been estimated that of 122 drugs of this type used worldwide, derived from 94 different species, 72% can be traced back to ethnic groups that used them for generations.

The prospect of discovering new drugs this way is actually somewhat worrisome, as these cultures die out or become integrated into what we’d consider “modern society.”  These old “medicine men” and “shamans” die before imparting their knowledge to a new generation, and these kinds of treatments are lost.

The collection of species and formation of databases is interesting, and has only become practical in recent history due to the advent of computers that can actually store and access all the information.  With this process, we’re talking about going into a rain forest, for example, and collecting every plant and insect species you can find, then running various genetic and proteomic screens on the cells of each to see whether they produce anything interesting or respond to anything.  This process can involve thousands of species across a single square mile of rain forest, necessitating not only a great deal of storage space for the samples themselves, but also computing power to allow other researchers the ability to search for information on a given species.

An example of a “screen” that one could carry out would be to grow bacteria around your plant or insect samples.  If you’ve ever heard the story of penicillin, you’ll know that Alexander Fleming (1928) noticed that his culture of Staphylococcus bacteria stopped growing around some mold that had found its way into the culture.  From that mold, penicillin was developed as our first antibiotic.  The same kind of principle can be applied here: mix your samples together and “see what happens.”  If anything interesting happens, you then continue investigating that sample until you isolate the compound that is doing that interesting thing.

The isolation of that “interesting compound” can be very tricky, however.  In many cases, a particular anticancer or antibacterial agent may be housed inside the cells of our plant species.  Getting that compound out may be difficult, as it could be associated with the plant so tightly that you have to employ a variety of separation techniques.  And even after you apply those techniques, what you are left with may be nonfunctional, as the compound may require the action of the plant itself to work properly (i.e. the compound you want may still need other components to work).  Even after you isolate the compound you want, in order to make it a viable drug, you have to be able to synthesize it, or something like it, chemically in a lab setting, preferably on a massive scale so you can sell it relatively cheaply to the masses.  These processes can be daunting and costly.

So basically, it can be fascinating to discover new drugs, especially ones that were actually “discovered” thousands of years ago by cultures that have long since died out.  However, you may find that “discovering” the drug is the easy part – mass-producing it could be the most challenging aspect of the ordeal.

Primer: Scientific Funding

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

One would like to think that major universities spend their money on research for their various faculty members, but unfortunately for me, that typically isn’t the case.  Sure, there is a reasonable amount of money going to fund the research carried out by faculty members in biology, physics, and chemistry departments, but the reality is that in order for that research (and almost all of the important discoveries under the umbrella we call “Science”) to occur, money must come from sources other than the university.  In many cases, your tenure and rank at a given institution are determined by how much outside funding you bring in and where it comes from.

The majority of scientific funding in the United States comes from the Federal Government, mostly in the form of the National Institutes of Health (NIH) and, to a lesser degree, the National Science Foundation (NSF) and Department of Energy (DoE).  Scientific American did a great job recently summing up how much money goes into which pot at the Federal level with an easy-to-read graphic that I suggest you glance at.  Basically, the NIH gets $28.5 billion to divide amongst its various projects, including grants that professors and other individuals apply for.  The NSF gets $4.2 billion, and the DoE gets about $3.5 billion to devote to research.  For comparison’s sake, the Department of Defense gets $56.2 billion (excluding special funding in war-time).

Obviously, NIH is getting a substantial piece of that pie.  For the most part, if you are doing biomedical research like I am, then the NIH is the first place you apply to.  They will generally fund anything that you can tie to a disease or disorder.  Conversely, NSF won’t touch any grant that even implies it could help with disease research, instead focusing on really basic research.  Chemists and physicists can find opportunities at the NIH, but usually NSF and DoE (or others) are where they have to look for funding.  And that pot is much smaller than the NIH pot.

The process of applying varies from agency to agency, but for the most part, you go about it the following way:

  1. Find a grant application that applies to your research
  2. Write the application according to their explicit instructions
  3. Submit the grant by a given due date (usually a few times per year)
  4. The grant is assigned to a division of the agency and then further assigned to a committee
  5. The committee is made up of people who should know what they’re doing; they rank each grant they receive based on its merits, need, and contribution to science
  6. The committee is given a number of grants that they can fund (usually 5-20% of the total grants submitted)
  7. Funding is decided and you are notified of the decision

There are usually three decisions that can be made: a). the funding agency can grant you the money and accept your project as-is; b). the agency can give your grant a rank or score and suggest you make some changes and resubmit it; or c). they can “triage” your grant, meaning they didn’t even score it, and that it needs significant work to make the cut.  The committee in question will usually give you some kind of pointers as to why your grant was or wasn’t funded, but that experience will vary across agencies and committees.

The NIH has a few different grant series that you can apply for.  Some, like the one I applied for in early December, are considered “training grants.”  So in this case, the grant I applied for was a post-doctoral training grant (designated “F32”) that would pay my salary for 2-3 years, based on the project I outlined to them.  No equipment or anything would be paid for – just my subsistence.  Alternatively, the “Big Daddy” grant to get is designated “R01,” which is a big league research grant that awards up to $5 million to a researcher and their lab, paying for salaries, equipment, and even some travel money to conferences.  At many big academic institutions, you need to get an R01 before you can achieve tenure.  At some of them, you need two.  The going funding rate for these grants has been in the 8-10% range, which is pretty low.  It’s tough to get an R01 and you can spend a lot of your time writing these grants and trying to get them, rather than actually doing research.

There are alternatives to federal money, of course.  You could call these Private, or “Foundation,” Grants.  These entities are frequently not-for-profit groups that are set up to fund research according to their specifications.  The Michael J. Fox Foundation for Parkinson’s Research is one you may have heard of.  The American Heart Association is another.  The grants these foundations fund are typically quite a bit smaller than those funded by the government, rarely reaching into the millions of dollars.  They are also quite competitive; some could argue more competitive than federal funding.  Generally, you end up spreading yourself thinner across multiple foundation grants if that’s how you have to fund your lab, or you get a single federal grant (or two…).  It all depends on how large your operation is, how many people are under you, and how many projects you have running at a given time.

I’ll leave you with one last point about the funding of science (insert soap box here): the majority of scientific innovations and true breakthroughs come from research funded by the agencies listed above: NIH, NSF and DoE.  Private Industry, such as Pfizer or Merck, carries out its own research and development programs, but it relies heavily on basic research carried out in academic settings.  This is partially because these companies cannot patent what is published in a journal article by someone else, so they have to take other research, apply it to their own needs, and then create a patent that they can make money off of.  When federal funding for science drops, or doesn’t even increase with inflation, professors bring in less money and cannot afford to pay their workers.  That means that less basic research is done.  That means that Private Industry has to devote more money to R&D in order to make new discoveries.  That increases the amount of money they need to put into developing a drug (more on that in a future Primer…).  Finally, that means the drugs and treatments that then go to you cost more money, adding to the sky-rocketing health care costs we already have, mostly because the research Private Industry did is now covered under a patent for 10 years and no one else can make money on it and compete.

Funding of science at the federal level is incredibly important.  It’s hard enough as it is to get a grant, and it is vitally important that the money NIH, NSF, DoE, etc. get does not decrease, but instead increases.  That’s where scientific innovation comes from in the United States.  It’s why people from all over the world come here to get a Ph.D. and do research.  Because the United States values innovation and discovery.

As well they should.

Primer: Drug Metabolism

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

I chose to work on this subject for December because I may end up teaching a lecture or two on metabolism in early February to pharmacy students.  Obviously I’ll go more in-depth with them, but that isn’t the purpose of these Primers: they are intended as introductions.

Merriam-Webster defines “metabolism” as follows:

Metabolism –noun

a.  …the chemical changes in living cells by which energy is provided for vital processes and activities and new material is assimilated

b. the sum of the processes by which a particular substance is handled in the living body

This definition is all well and good, but we’re talking about a specific form of “metabolism” here, one concerned with the breakdown of a chemical compound, not necessarily for the purpose of generating energy.

Wikipedia provides us with a separate definition for drug metabolism:

Drug metabolism is the biochemical modification of pharmaceutical substances by living organisms, usually through specialized enzymatic systems.

So when we’re talking about an individual, such as an athlete, who has a “strong metabolism,” we’re talking about processes related to, but separate from, the ones typically involved in modification and removal of drugs from your system.

In general, drug metabolism consists of two separate processes known as Phases.  In Phase I metabolism, a given compound is broken down and typically inactivated (but not always, as we’ll see shortly).  This usually involves a specialized protein called an enzyme that modifies or removes a specific portion of the compound, rendering it pharmacologically inactive.  Phase II metabolism typically involves the addition of another molecule onto the drug in question, something we call a “conjugation reaction.”  This process also serves to increase the polarity of a given drug.  Usually, we think of Phase I reactions as preceding Phase II reactions, but not always.

When I say “polar,” I mean it in a sense similar to a planet, in that a planet has “poles” (e.g. north and south).  For the sake of simplification, you can also think of a magnet or a battery instead, with a “positive” pole and a “negative” pole.  In this fashion, chemicals can also have a more positive end and a more negative end, including chemicals like water.

In water (i.e. H2O), the oxygen atom carries a partial negative charge while the two hydrogen atoms are partially positive.  Therefore, water is polar: it has an end that is more positive and an end that is more negative.  Polar compounds are also considered “hydrophilic” (i.e. “water-loving”), mostly because these polar chemicals tend to dissolve readily in water.

There are examples of “hydrophobic” (i.e. “water-fearing”) chemicals as well, also known as non-polar.  You know how oil and water don’t mix?  That’s because oils like fats or lipids are hydrophobic and non-polar.  Hydrophobic (non-polar) compounds are those that do not mix well with hydrophilic (polar) molecules like water.

The key to drug metabolism is to realize that most of your cells, and thus organs, are made up largely of lipids, so if you have a drug that is particularly “lipophilic” (and thus hydrophobic), then the drug is more likely to hang around in your body.  That is to say, a drug that is non-polar can hang around longer, affecting you for longer than you may otherwise want.  If you use a more polar drug (i.e. hydrophilic), it’s likely to get passed out of your body much faster.  Much of your body’s ability to expel chemicals and metabolites depends on the ability of your kidneys and liver to get those chemicals and metabolites into a form that works well with water, as water is what you typically get rid of (i.e. urine).

When your body recognizes a foreign compound, such as a drug, it wants to make that compound more polar so it can excrete it.  Thus, your liver contains a number of enzymes that do their best to make foreign compounds more polar so you can get rid of them.

This process obviously impacts the ability of a drug to act, which is why it matters.  There’s a reason why drugs are introduced to your body orally (i.e. through the stomach/intestines), or intramuscularly, or intravenously.  If you take a drug orally, it is subjected to what is termed First-Pass Metabolism.  Typically, when you eat something, the nutrients from whatever you ate are taken up through the portal system and hit your liver before they hit your heart, and only then go on to the rest of your body.  Therefore, if you take Tylenol for a headache in pill form, some of it will be broken down in the liver before the heart gets it and pumps it to your brain to help with your headache.

Alternatively, you could take Tylenol intravenously, which bypasses the liver and thus gives you a full dose.  However, Tylenol is toxic in high doses, so you would never want to inject much of it (or any of it…there are better choices if that’s what you’re considering….) for fear that it could kill you.
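
To put rough numbers on the first-pass idea, here’s a minimal sketch in Python.  The fractions below are hypothetical, made up purely for illustration (real values vary widely by drug and by person): oral bioavailability is the fraction absorbed from the gut times the fraction that survives the liver’s first pass, while an IV dose skips both steps.

```python
# Toy first-pass metabolism calculation.
# All numbers below are hypothetical, for illustration only.
fraction_absorbed = 0.9    # fraction of an oral dose taken up from the gut
hepatic_extraction = 0.4   # fraction the liver removes on the first pass

f_oral = fraction_absorbed * (1 - hepatic_extraction)  # oral bioavailability
f_iv = 1.0                               # IV bypasses the gut and the first pass

dose_mg = 500
print(f"Oral: {dose_mg * f_oral:.0f} mg reaches circulation")  # 270 mg
print(f"IV:   {dose_mg * f_iv:.0f} mg reaches circulation")    # 500 mg
```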

The final concept to consider, aside from drug modification, polarity and first-pass metabolism, is how we could use this system to our advantage.  There are times when you take a drug such as a benzodiazepine like Valium (diazepam).  Valium, on its own, is very useful as a depressant, used to treat things from mania to seizures; however, drug metabolism produces metabolites that are also active (called, not surprisingly, active metabolites).  In the case of Valium, it is broken down in the liver to nordiazepam, then temazepam and finally oxazepam.  Each one of these metabolites is active to some extent, which means that a single dose of Valium will last for quite a while as it’s broken down into other compounds that still affect you.
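
One way to see why a single dose “lasts” is to model the chain as a series of first-order steps, each compound decaying into the next.  This is a toy model with hypothetical rate constants (not real diazepam kinetics), integrated with a simple Euler loop; the point is only that the total amount of active compound tails off far more slowly than the parent drug alone.

```python
# Toy model: parent drug -> metabolite 1 -> metabolite 2 -> eliminated.
# Rate constants are hypothetical, chosen only for illustration.
k1, k2, k3 = 0.5, 0.2, 0.1          # per hour
parent, m1, m2 = 100.0, 0.0, 0.0    # starting amounts (arbitrary units)

dt = 0.01                            # hours per integration step
steps_per_report = int(12 / dt)      # report every 12 hours
for step in range(int(48 / dt)):     # simulate 48 hours
    d_parent = -k1 * parent
    d_m1 = k1 * parent - k2 * m1
    d_m2 = k2 * m1 - k3 * m2
    parent += d_parent * dt
    m1 += d_m1 * dt
    m2 += d_m2 * dt
    if (step + 1) % steps_per_report == 0:
        t = (step + 1) * dt
        print(f"t={t:4.0f} h  parent={parent:6.2f}  "
              f"total active={parent + m1 + m2:6.2f}")
```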

Sometimes, you can administer a non-active drug that then becomes active once it’s modified in your liver.  We call this a prodrug.  Codeine, for example, is modified by Phase I metabolism to its active form, morphine.  You typically administer morphine to someone intravenously, as it’s rapidly metabolized in the liver.  Codeine allows you to take advantage of your liver to give you morphine in a pill form, which you otherwise wouldn’t be able to do (as it would be broken down too far before it even hit your heart).

In short, drug metabolism is an extremely important process to consider when designing a drug.  You need to take ease of use and route of administration into account, you need to consider whether a drug has active metabolites or not, and you need to be aware of how hydrophilic/hydrophobic a drug is if you want it to remain in your body for any reasonable amount of time.

Primer: Structure of the Brain

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

I can’t say I’ve been excited about writing this one, as brain anatomy is, quite possibly, the most boring thing I can think of to write about.  I did a rotation at SLU in a lab that focuses on anatomy and how individual brain structures interact with one another, and that 6 week period was more than enough for me.  As that professor told me, it’s very important work that someone needs to do, even if it may not seem all that interesting.  This kind of work is how researchers have figured out which brain component “talks” to which other one(s), and how intertwined all these connections really are throughout the brain.

For the sake of this posting, I’ll simply point out that brain mapping has been carried out in a variety of ways.  Quite a bit of it was done, over decades, by studying people who had hit their heads.  If they lost their memory, or their sense of smell, clinicians could localize the injury to a specific area of the head, then look at the brain post-mortem and see what happened.  Ultimately, they would find a lesion of dead tissue in the region that led to the deficiency.  Similarly, the study of stroke victims also provided clues to the function of certain brain locations, as a stroke occurs when blood flow is cut off to an area of the brain, typically leading to damage.  Alternatively, modern science uses stereotactic injections of traceable materials in mice, rats and primates that can be visualized in brain slices, showing that a series of neurons in one area is connected with neurons in a separate region of the brain.

It is through this work that certain pathways were elucidated, including the reward pathway (very important for drug addiction, gambling addiction, etc.); the movement pathway (mostly for Parkinson’s disease, but important for voluntary movement in general); the sensory systems (how the visual cortex signals, the auditory cortex, etc.); the amygdala (figuring out what this structure did and where it went led to quite a few lobotomies back in the day); and memory (signals transferred between the hippocampus, the reward system, and the cortex…very complicated network…).  Brain mapping like this helped determine where everything connects together, and which areas are important.

While the human brain is a difficult nut to crack, it can be divided up into different portions.  For the sake of this little blurb, we’ll focus on the three primary divisions of the brain: the prosencephalon (forebrain), the mesencephalon (midbrain) and the rhombencephalon (hindbrain).

The prosencephalon, or forebrain, is further divided into the telencephalon and the diencephalon.  The telencephalon consists, primarily, of the cerebrum, which includes the cerebral cortex (voluntary action and sensory systems), the limbic system (emotion) and the basal ganglia (movement).  As you can see from that list, for the most part, the telencephalon constitutes what “you” are: your thoughts, your feelings, and your interaction with the world around you.  It’s where a lot of your processing happens.  The telencephalon in humans is quite a bit more developed than in other species, and it’s really what separates the human brain from those of less developed species (i.e. the human telencephalon is what really separates us from a chimpanzee).  The diencephalon, on the other hand, consists of the thalamus, hypothalamus and a few other structures.  The thalamus and hypothalamus are very important for various regulatory functions, including interpretation of sensory inputs, regulation of sleep, and release of hormones to control eating, drinking, and body temperature.

The mesencephalon is made up of the tectum and the cerebral peduncle.  The tectum is important for auditory and visual reflexes and tends to be more important in non-mammalian vertebrates, as they don’t have the developed cerebral cortex that humans do (more on that later).  The cerebral peduncle, on the other hand, is a mixed bag of “everything in the midbrain except the tectum.”  It includes the substantia nigra, which ties into the movement and reward systems.  I think it’s fair to say that, aside from these things, the function of the midbrain, overall, has yet to be fully determined.

The rhombencephalon is quite important, even though it’s probably the oldest part of the brain, from an evolutionary standpoint.  It includes the myelencephalon (medulla oblongata) and the metencephalon (pons and cerebellum).  The medulla oblongata is important for autonomic functions like breathing and heart function.  The pons acts primarily as a relay with functions that tie into breathing, heart rate/blood pressure, vomiting, eye movement, taste, bladder control and more.  Finally, the cerebellum is important for a feeling of “equilibrium,” allowing for coordination of movement and action, timing and precision.

As you may have noticed, if you go from back to front, you get increasing complexity in brain function.  For example, the hindbrain is important for very basic things like breathing, heart rate, and coordinated movement.  These are functions that are important in nearly all organisms, all the way down to the smallest worm and insect.  Further up, the mesencephalon adds control of reward and initiation of voluntary movement, giving the organism voluntary control rather than solely reflexive control.  Then, the diencephalon starts acting like a primitive brain, working in regulatory functions and more complicated reflex action to help maintain the more complex organism that has been assembled.  And finally, the telencephalon yields the ultimate control over the organism, with things like memory, emotion, and greater interpretation of sensory inputs.  In a rat or a cat, the hindbrain remains a large portion of the brain, but in humans, the forebrain is much larger relative to the hindbrain.  With that size comes greater development of brain structure and function.

So yeah, the brain is kinda complicated.  Actually, it’s really complicated and, for the most part, I do my best to ignore all of the complex wiring networks that occur within.  However, it is important work that needs to be done in order for surgeons to do what they do, and for neuropharmacologists to develop drugs that target some brain areas and not others.  For the most part, I’ll leave this research to more interested people…

The Science of Speaking Out

Ira Flatow had a group of climate scientists on his show, NPR’s Science Friday, this past week discussing the “fine line” that many scientists find themselves walking.  Philosophically, there are many in the scientific community who believe they should present the facts and allow the public to interpret them.  These scientists frequently just want to stay out of that realm of discourse, allowing the public (and, therefore, politicians) to decide how their data are used and what the best course of action is.  Largely, this is how it’s always been.  Early astronomers could tell what they knew, but had to wait for their ideas to be accepted by their respective communities.

This particular group of climate scientists, however, is getting together to move beyond the boundaries they have typically held themselves to, instead choosing to speak out with what they know and actually make policy recommendations based on their information.  Largely, this group adheres to a maxim usually attributed to Daniel Patrick Moynihan:

“People are entitled to their own opinions but not their own facts.”

That is to say, these scientists are tired of presenting facts time and time again only to have them ignored and have other people’s opinions matter more than proven factual data.  To the scientific community, there is no question that global warming is occurring and that humans contribute to it.  In a separate (but related) issue, to the scientific community, there is no question that evolution is occurring and that natural selection is the most likely mechanism.  There is no question that frozen embryos are kept in that state for years and end up “dying” in a liquid nitrogen freezer when they could have been used for stem cell research rather than being discarded in a biohazard bag and incinerated.  Yet politicians, for some reason, are able to ignore these facts in their decisions about what is taught in our schools, what energy policies should be enacted, and how important research can be conducted.

After listening for a while, an individual called in and asked a question that intrigued me, one that I hadn’t really considered up until now: why is it that members of Congress, and politicians in general, feel the need to question the facts of science, yet do not pose the same questions toward religious beliefs?  Let us assume that all politicians turn their magnifying glass toward all information that comes across their desks (hah!).  Shouldn’t that magnifying glass analyze all information the same way, equally?  Shouldn’t they ask, “Well, this group of people used rigorous experimental techniques and verified their findings, and this other group didn’t.  Which should we believe?”

I mentioned this concept to Brooke and her attitude was, generally speaking, “That’s Just How It Is.”  This is true, but it still irks me.  I realize that this is how religious beliefs have always been.  There has always been a large enough group of individuals that are so adamant about their beliefs that, no matter what facts you give them, they will not shift policy to match.  The most recent issue of childhood vaccinations and the misconceptions about them comes to mind.  I’m not sure if this is a failing of critical thinking skills or education in general, but it’s been such a pervasive problem throughout history that I have to wonder.  Frequently, it takes at least one generation to change minds about these things, and in some cases, many generations.  I’m just afraid that, on many of these issues, we don’t have that long.

Case in point: the Catholic Church tried Galileo for his “heretical” claim that the Earth revolves around the Sun, finding him “vehemently suspect of heresy.”  He died in 1642 and couldn’t be buried with his family because of it (to be fair, the Church moved his remains to their rightful place almost 100 years later).  However, the Catholic Church waited over a century before accepting heliocentrism, and until 1992 to formally acknowledge its error in condemning Galileo himself.

Scientists are getting a little annoyed with that kind of treatment.  Granted, the world moves faster today and ideas are disseminated and accepted much faster, yet Natural Selection has been a concept for over 150 years and there are still people that use the phrase “but it’s just a Theory.”  It shouldn’t take over 150 years, let alone 300, for revolutionary ideas about our place in the universe to be accepted.  And it really shouldn’t take that long for governments to make policies that use legitimate scientific data to preserve our place in that universe by preventing our extinction from it.  In 300 years, without any change in policy, we won’t have California or Florida anymore.  It will be too late.

Primer: Memory

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

The whole idea of “memory” has intrigued me for quite a while, arguably since before I was even that interested in science in general.  Part of this is my attraction to all things computers.  I think I built my first computer (rather, helped Dad build one…) back in the late-90s, and at that time, I began to understand all of the components that make it function.  The idea of “input/output,” the function of a central processing unit (CPU), RAM and hard drives…all of these things proved relatively easy to grasp, and understanding these general functions made my understanding of the brain a bit easier in the process.

Let’s think of it this way.  You interact with computers in different ways, but one way is with a keyboard.  You type something into the keyboard and the data you input is converted by the CPU into something that can be understood by the system, in this case, binary code (i.e. a series of “1s” and “0s”).  All of your inputs from the keyboard are stored in RAM for faster, short-term access.  If you click the “Save” button on whatever you’re doing, however, the data stored in RAM gets sent to the slower-access hard drive.  As you open programs, information is pulled off the hard drive and into RAM so that your CPU can process it faster, and then you and your keyboard can get at and interact with it.  This is why, in general, having more RAM speeds up your computer: it can pull larger and larger programs into RAM so your CPU can get at them more easily, and thus, you can interact with them faster.

In very basic terms, your brain works the same way.  We have inputs in the form of our 5 senses.  The information from those senses gets encoded by your brain’s Cerebral Cortex and is stored temporarily in the Hippocampus (i.e. RAM) before being encoded for long-term storage back in other regions of the Cortex (i.e. hard drive).  Most of the time, your brain “Saves” its data to the Cortex at night, which is why sleeping is so very important.  The “processing” portion of this paradigm can be confusing, but keep in mind that the brain is divided up into specific regions.  There’s a “visual cortex,” an “auditory cortex,” etc.  These regions (within the Cortex…) interpret what each sense gives you and then send that information through the Temporal and Parietal Lobes (also in the Cortex).  From there, the information is spread to the Hippocampus (i.e. RAM) for “integration” before being set as full, long-term memories out in the rest of the brain.

How is that information stored, you may ask?  Again, it’s much like a hard drive.  If you’ve used computers extensively, you know that hard drives are divided up into “sectors” (ever get a disc read error that says “bad sector”?).  When you have a new hard drive, you start with a clean slate.  As you install programs and add files, it gets filled up.  Once you delete something, that sector isn’t really “deleted,” but it is removed from your access: it isn’t really gone until it’s overwritten by something else (which is why you can sometimes retrieve old files off a hard drive that you thought had been deleted).  Whenever you “defragment” your hard drive, you are basically rearranging those programs to keep everything closer together, and thus, quicker to access.  The data encoded on the hard drive is stored as “1s” and “0s” (i.e. binary code).  Each 1 or 0 is considered to be a “bit,” while a set of eight 1s and 0s (e.g. 11010101, 10011010, etc.) is considered a “byte.”  This is where “kilobytes,” “megabytes” and “gigabytes” come from.

The idea of 1s and 0s comes from logic, specifically the definitions of “True” (i.e. 1) and “False” (i.e. 0).  If you have a “1,” then you have a connection.  If you have a “0,” then you don’t.
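
None of this is specific to brains; it’s just binary bookkeeping, which a few lines of Python can make concrete:

```python
# A byte is eight bits; Python lets us write one directly in binary.
byte = 0b11010101
print(byte)                  # 213 -- the same value in decimal
print(format(byte, "08b"))   # '11010101' -- back to its eight 1s and 0s

# Storage units are just counts of bytes.
kilobyte = 1024              # bytes (traditional binary convention)
megabyte = 1024 * kilobyte
gigabyte = 1024 * megabyte
print(gigabyte * 8)          # one gigabyte holds ~8.6 billion individual bits
```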

Bringing this back to neuroscience, the same general rule appears to apply with regard to memories, or the concept of “learning” in general.  In order to form a memory, it needs to be encoded much like your hard drive is: in a series of combinations of connections (or missed connections) between neurons spanning the entire brain.  There are various molecular mechanisms that can account for these connections, or lack of connections, and those go back to receptor theory.  Remember that neurotransmission involves the release of a neurotransmitter (e.g. dopamine, adrenaline, etc.) from one neuron to bind with a receptor on another.  If a neuron stops receiving signals from another neuron, it will remove receptors from the outside of the cell, thus limiting or negating the signal.  If, however, a neuron keeps getting increased signaling from an adjacent neuron, the receiving neuron will increase the number of receptors on the outside of the cell, thus making it easier to signal.  Therefore, we have a mechanism for strengthening or weakening the connections between two neurons.

One could consider a “strengthened” neuronal connection to be a “1” and a “weakened” neuronal connection to be a “0.”  It is in this way, it is thought, that memories can be formed on a cell-to-cell basis.
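
Here’s a deliberately oversimplified sketch of that idea, my own toy model rather than anything from the literature: a “synapse” gains receptors when it keeps getting signaled and loses them with disuse, and we read its final strength out as a 1 or a 0.

```python
# Toy model: a synapse's "strength" is its receptor count, which rises
# with repeated signaling and falls with disuse. Purely illustrative.
def run_synapse(signals, receptors=10):
    for active in signals:
        if active:
            receptors += 1                      # signaling -> add receptors
        else:
            receptors = max(0, receptors - 1)   # disuse -> remove receptors
    return receptors

THRESHOLD = 10  # above this, call the connection "strengthened"

busy = run_synapse([True] * 8 + [False] * 2)    # mostly active input
idle = run_synapse([False] * 8 + [True] * 2)    # mostly silent input

for name, count in [("busy", busy), ("idle", idle)]:
    bit = 1 if count > THRESHOLD else 0
    print(f"{name} synapse: {count} receptors -> stored as a {bit}")
```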

The neurons that memories are stored in are located throughout the brain, much like “sectors” on your hard drive.  As you stop using certain memories, the synapses of those neurons weaken to the point where they can be, effectively, “overwritten” in favor of a new memory.  This is also how the idea of “repressed memories” can come about: a memory stored in a region of your brain that you have forgotten about can re-manifest later, because if it isn’t overwritten, it’s still there.

From a molecular standpoint, scientists have a pretty good idea how memory “works,” but being able to decode those memories is a whole different beast.  Returning to our computer metaphor, imagine knowing nothing about computers and finding a hard drive.  What would you do with it?  Would you take it apart?  How would you know what it was?  Or what it contained?  And once you figured out that it, somehow, contained information, how would you read it?  If you eventually found out that it involved 1s and 0s, how would you know how those 1s and 0s were organized across the hard drive, and then finally, what they told you?

This is why it’s highly unlikely that we’ll ever be able to make or see memories like we do in the movies, at least not for a very long time.  It’s one thing to understand the basis for how it works, but it’s a whole other thing to try and figure out how it’s organized within a system like the human brain.  Also, it’s been estimated that the human brain holds terabytes of information, up to around a petabyte, which translates to 8,000,000,000,000 to 8,000,000,000,000,000 individual 1s and 0s, or individual neuronal connections.
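
As a sanity check on those figures, here’s the conversion (taking a terabyte as 10^12 bytes; the upper number corresponds to about a petabyte, i.e. 1,000 terabytes):

```python
# Where the 8 trillion to 8 quadrillion figures come from.
bits_per_byte = 8
terabyte = 10**12                        # bytes
petabyte = 10**15                        # bytes, i.e. 1,000 terabytes

print(f"{terabyte * bits_per_byte:,}")   # 8,000,000,000,000 bits
print(f"{petabyte * bits_per_byte:,}")   # 8,000,000,000,000,000 bits
```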

Imagine looking at a sheet (or multiple sheets…) of paper with that many 1s and 0s on it and trying to decide which version of Windows it represents.  Or where your dissertation is…not the Word Document, but the PDF.  That’s what we’re talking about.

So yeah, I just find the concept of memory to be fascinating.  With modern computers, we’re effectively reverse-engineering the human brain and, in doing so, learning more and more about how technological and biological computation can work.  But next time you see some “memory reading” device on TV, bear in mind what’s actually required to make that technology work.

Primer: The Scientific Method

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

There are quite a few things that go flying by in the news that concern me (and I have posted about them here…at…length…), but one that really gets to me is public misunderstanding of Science.  As in, capital “S” Science.  Not really the fact that many people don’t know certain scientific facts, or don’t really understand how many things work, but more that they do not understand how science is done and what it really means.  I will seek to clear up some of that here.

First, however, what does Dictionary.com tell us?

Science – noun

1. a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws: the mathematical sciences.
2. systematic knowledge of the physical or material world gained through observation and experimentation.
3. any of the branches of natural or physical science.
4. systematized knowledge in general.
5. knowledge, as of facts or principles; knowledge gained by systematic study.

Now, this definition seems to center upon the natural/physical sciences; however, many, if not all, of the principles that “science” adheres to apply to the social sciences (e.g. sociology, psychology, etc.) and other disciplines as well.  That said, I will focus on what I know best.

“Systematically” is the word sprinkled about in the definition above, and rightfully so.  “Systematically” refers to how science is conducted, generally through what we refer to as the scientific method.  The Wikipedia article, as usual, is a good start for further information on this particular subject, but basically, here’s how it works:

  1. Formulate a hypothesis
  2. Test the hypothesis through experimentation and observation
  3. Use collected data to confirm or refute the initial hypothesis
  4. Form a new hypothesis based on what was learned in steps 1-3

A “hypothesis,” put simply, is an educated guess at the answer to a question you have.  Many times, especially when you’re first learning the scientific method, you may phrase it in the form of an “If/Then” statement.  For example:

If I drop this rock, then it will fall

The “If” portion of the above statement represents the “Independent Variable,” while the “Then” portion represents the “Dependent Variable.”  Effectively, the Dependent Variable is what you’re measuring and the Independent Variable is what you’re changing in the system.  In this particular case, if you drop the rock, does it fall or not?  You can measure whether or not it falls.  If you don’t drop the rock, does it still fall?  And so on.  It is called the Dependent Variable because it “depends” on what you do with the Independent Variable.

You are generally allowed to examine multiple Independent Variables in a given hypothesis (or series of hypotheses), but you should change only one at a time, and the Dependent Variable cannot change.  What would happen if I dropped a rock on Earth and dropped another one on Mercury?  My results wouldn’t be comparable, because I changed too many things.  I could change the size of the rock, but if I’m measuring the rate at which the rock falls to the ground, I need to make sure the force of gravity is held constant, as the sketch below shows.
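
To make that concrete, here’s a tiny worked example using the standard free-fall formula, t = sqrt(2h/g) (ignoring air resistance): the same rock dropped from the same height takes very different times on Earth and on Mercury, so mixing planets would confound any comparison of rock sizes.

```python
import math

# Free-fall time from height h, ignoring air resistance: t = sqrt(2h / g)
def fall_time(height_m, g):
    return math.sqrt(2 * height_m / g)

height = 10.0  # drop height in meters -- held constant across trials
for planet, g in [("Earth", 9.81), ("Mercury", 3.7)]:
    print(f"{planet}: {fall_time(height, g):.2f} s")
# Earth:   1.43 s
# Mercury: 2.32 s -- same rock, same height, different answer
```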

Obviously, this is a very simple example.  If one were to ask something a bit more complicated, you could ask the following:

If Tylenol is administered to people with headaches, then they will experience pain relief.

The question above seems simple enough, right?  I could just give Tylenol to a bunch of people with headaches and see if we get an effect.  Then I would know whether my hypothesis was correct.  But what would happen if people prone to migraine headaches were participating in my study?  Or alcoholics (who don’t break down Tylenol all that well)?  The data I would receive would be flawed, as the Tylenol probably wouldn’t do anything for people with migraines and may actually make alcoholics feel worse.  My hypothesis would appear to be refuted, not because the drug doesn’t work, but because of whom I tested it on.

Here is where we really need to consider “Controls.”  These are a separate series of measurements that you compare your experimental results against.  You may choose to set this up in your experiment in a variety of ways, but one possibility is to give those with migraines or the alcoholics (and all other test subjects) a “placebo,” something that looks like Tylenol but is actually inert.  Then, you can compare the responses to see whether Tylenol had any effect, as in the sketch below.
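
Here’s a toy simulation of that logic in Python, with made-up effect sizes and no real clinical data behind it: the drug’s apparent effect is the difference between the treated group and the placebo group, not the raw improvement in the treated group alone (placebo responses are real, too).

```python
import random

random.seed(42)  # make the toy results reproducible

# Hypothetical trial: pain scores from 0-10, effects made up for illustration.
def simulate_group(n, mean_relief):
    baseline = 7.0  # average headache pain before treatment
    # Post-treatment score = baseline - relief + individual noise.
    return [baseline - mean_relief + random.gauss(0, 1.0) for _ in range(n)]

treated = simulate_group(50, mean_relief=3.0)  # hypothetical drug effect
placebo = simulate_group(50, mean_relief=1.0)  # placebo effect is real too

def mean(xs):
    return sum(xs) / len(xs)

print(f"treated group mean pain: {mean(treated):.2f}")
print(f"placebo group mean pain: {mean(placebo):.2f}")
print(f"estimated drug effect:   {mean(placebo) - mean(treated):.2f} points")
```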

Above, I mention that after you formulate a hypothesis, you must test it, holding as many things constant as you can while varying only a specific aspect of the experiment, especially an aspect that you can control to some degree.  This brings us to the idea of “testability.”  In order for your experiment to be considered “Scientific,” it must be testable.  If it isn’t “testable,” then it doesn’t satisfy the “systematic” part of the definition.

Over time, enough experiments are done to warrant considering a certain concept to be a “Scientific Theory.”  That is to say, a Theory is an idea that is supported by an array of evidence and co-exists with other known Theories that are equally verified by experimentation.  A “Scientific Law,” by contrast, represents something truly fundamental on which the rest of science and knowledge rests, usually a concise description of how nature behaves; note that a Theory doesn’t “graduate” into a Law, as the two serve different purposes (a Law describes, a Theory explains).  An example of a Theory is “The Theory of Natural Selection.”  An example of a Law is “Newton’s Laws of Motion.”  Wikipedia also has a nice list of other Scientific Laws.

Most Laws tend to be Physics/Chemistry-related, as these are the bedrock concepts upon which everything else stands.  You can’t really study Biology without fluid dynamics and quantum mechanics (well, you can ignore them for the most part, but they do get involved in certain situations).  Theories, on the other hand, are much less clear cut.  They tend to represent a constantly evolving field of research, where new data is being applied every day.  I will steal the US National Academy of Sciences definition to explain more fully:

Some scientific explanations are so well established that no new evidence is likely to alter them. The explanation becomes a scientific theory. In everyday language a theory means a hunch or speculation. Not so in science. In science, the word theory refers to a comprehensive explanation of an important feature of nature supported by facts gathered over time. Theories also allow scientists to make predictions about as yet unobserved phenomena.

A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not “guesses” but reliable accounts of the real world. The theory of biological evolution is more than “just a theory.” It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.

So in some ways, a Theory is treated on almost the same plane as a Law, but they really aren’t the same thing. A Theory can still be modified, while a Law is much, much harder to change.  In that first sentence, it says “no new evidence is likely to alter,” meaning you could still alter it, but it’s highly unlikely.

My overall concern with perceptions of what Science is stems from the various debates over climate change, evolution, stem cell research, etc.  In many ways, much of the political hubbub is over something that Science isn’t equipped to answer.  By definition, it can only give you a fact; it is up to the individual to decide how to apply their morals to that fact.  Science can tell you that Evolution is happening and that Natural Selection is the current Theory to describe how it happens.  It’s a “Theory” because more data are added every day, but the Theory is only strengthened, not weakened.  Overall, Natural Selection is what happens.  End of story.  Scientifically, embryonic stem cells come from an embryo, a collection of cells that does not exhibit qualities like self-awareness, self-preservation, or consciousness.  Whether or not you agree that an embryo constitutes a life is up to you to decide, but arbitrarily claiming that “Science says that it’s a life” is incorrect and a misuse of the term.  Saying that there are “gaps in the fossil record,” so that must mean that God exists and God created the Earth in 6 days, ignores how Science works: God is, by nature, “untestable,” and therefore beyond the purview of Scientific understanding.  These are but a few examples of how some misunderstand Science and try to apply it to things it shouldn’t be applied to, or at least in ways it shouldn’t be applied.

The Study of Science is a systematic, logical progression that involves the formulation of a testable hypothesis, where testing involves experimentation, observation and collection of data to support or refute the hypothesis.  Hypotheses around a general subject can eventually add up to a Theory, and truly fundamental observations of the natural world become Law.  That’s all it is, folks.  No more.  No less.

God is (Un)necessary

I listened to an episode of On Point on NPR this past weekend, where Tom Ashbrook interviewed Leonard Mlodinow, co-author with Stephen Hawking of a new book titled “The Grand Design.”  I had never heard of Mlodinow before this episode, but I’d certainly heard of Hawking, the theoretical physicist who is confined to a wheelchair as a result of advanced ALS and who wrote “A Brief History of Time” back in the 80s.  That book was relatively short (heck, even I was able to read it) and did a reasonably good job of explaining some very advanced cosmological concepts to the layman.

Their new book, “The Grand Design,” is set up to answer the question: “Is God necessary?”  Or more generally, does all life in the Universe require the hand of an all-powerful Creator being?  According to their book, the answer is “no.”

Now, as Mlodinow says in the interview, that answer doesn’t mean “there is no God.”  He points this out a few times: Science itself cannot determine whether or not God (or any Creator) exists, but many or all of the questions of Creation can, in fact, be explained by Science.  Hawking was quoted when the book came out as saying that “there is no God,” but that was a mischaracterization of what the book describes.

Interestingly, around the 12:30 mark of the podcast, Ashbrook plays some tape of an interview with Hawking from a few years ago.

Interviewer: “Do you believe in God?”

Hawking: “The basic assumption of science is scientific determinism. The laws of science determine the evolution of the universe, given its state at one time. These laws may, or may not, have been decreed by God, but he cannot intervene to break the laws, or they would not be laws. That leaves God with the freedom to choose the initial state of the universe, but even here, it seems, there may be laws. So God would have no freedom at all.”

While I realize this is something of a cryptic answer, my interpretation is that Hawking kinda believes as I do about this whole “Creation” thing.  Hawking is describing the idea that our Universe is based on a series of Laws (e.g. gravity, the speed of light, etc.) and our Universe is well-suited to the existence of Life (as we know it…).  If the Universe did not have the Laws it currently does, then Life would not exist (as we know it).  Therefore, God set a series of Laws (or adhered to previously existing ones) that allowed for the existence of Life.  Therefore, we humans eventually showed up on the cosmic block.

So yeah, as the authors point out, a Creator may not be “necessary” in a Scientific manner, in that our Universe is apparently set up in such a fashion that Life can and does exist.  From that standpoint, God is “unnecessary.”

However, I would argue that God is, in fact, “necessary” for our lives, at the very least for the social and moral implications.  Sure, God may not be “necessary” for our existence, but He is “necessary” for bringing meaning to that existence.  For providing a moral compass to follow.  For helping define who we are and who we all want to be.  It all depends on how one views “God” (whether in the Christian, Muslim, or Judaic traditions, amongst others), but all faith traditions provide us with a relatively clear idea of the kind of people we should be.  The kind of people we all want to be.

I guess I’ve always felt this way.  I’ve never felt that the “Creation” part of the Bible was all that important to who I was.  The Book of Genesis does not define my life.  It really isn’t important how I was “created.”  However, it’s important that I’m here now.  I do exist, regardless of how it happened.  My existence entails a sense of responsibility that I conduct that existence in a manner I can be proud of.  So for me, God is necessary.

Side-note: Tom Ashbrook asks Mlodinow multiple times to explain how you get “something” out of “nothing,” as in, how exactly did all of the things we know just “spring up” out of the void of existence (e.g. the initial “Creation” itself)?  He tries explaining a few times, but it was still pretty difficult to follow…may just need to read the book…  I think he was trying to explain it in terms of quantum mechanics in that, according to what we know from quantum theory, you can actually have things just “appear.”  He never said “Heisenberg’s Uncertainty Principle,” but I think that’s what he was getting at.  Heisenberg showed that you cannot simultaneously know both an object’s exact position in space and exactly how fast it’s moving.  As I understand the theory, there’s all kinds of math involved that suggests you can actually get “something” out of “nothing.”  Mlodinow also talked about multiple dimensions in his answer.  In short, I don’t really understand it either, but it was addressed in the podcast as well…. 😛

Teaching Experience

About a month ago, the FUTURE in Biomedical Sciences group here at the University held a forum, of sorts, to help answer questions from graduate students and postdocs regarding what it takes to get a job at a Liberal Arts institution, especially in the State of Iowa (where these four individuals reside).  The FUTURE group, now in its second year, has multiple professors from Liberal Arts schools across the state (this year’s participants came from Loras College, Drake University, Morningside College and Wartburg College) come to Iowa City to do research for the summer, learning some new experimental techniques and generally expanding their horizons beyond what they can do at their respective institutions.  The forum was very informative, covering a variety of topics including how to write up your resume, what kinds of places to apply to, what to look for in a school, when to start looking for jobs, and what the jobs tend to be like.  More than anything, however, they all stressed the need for experience: the more experience you have on your application, the better chance you’ll stand against other applicants.  I’m not really looking for another job yet or anything, but it’s really good to have this information at the back of my mind as I keep building up that resume.  Hearing them talk about their jobs makes me want to get to that stage even more, providing me with some much needed motivation to get a few things done while I’m here!

Thankfully, I already have a leg up on that one.  Back at SLU, I had the good fortune of getting to teach in “Drugs We Use and Abuse,” a course run by the graduate students of the Pharm/Phys Department.  It is team-taught each Fall to around 50 non-majors (e.g. Business majors, History majors, etc.) and generally centers around…well…just what it sounds like.  If you ever wanted to learn what meth, cocaine, opiates, depressants and caffeine do to your body, then this is the class for you.  I taught in it for 3 years: I was a section director for 2 of those years and course director for 1 year.  The experience was so good that I decided I want to do it full-time as a career: teach at the undergraduate level.

When I took the position here at the University of Iowa, I asked my mentor if it would be alright for me to continue teaching occasionally alongside the rest of the research I’m doing.  He was kind enough to allow it (if anything, he was excited that I’d take a few lectures off his hands).  This October, I’ll be teaching two classes of Advanced Toxicology, one on neurotransmission and the other on neurotoxic agents (e.g. cocaine, methamphetamine and ecstasy).  Both subjects are within my proverbial wheelhouse, so they shouldn’t take up all that much preparation time.  That, and I have the previous year’s lectures in a PowerPoint file to help me throw something together.  While Drugs We Use and Abuse was directed at non-major undergraduates, this class is for graduate students and there are only 12 in the class, so the dynamic will be quite a bit different from what I’m used to.

I will likely get the opportunity to teach in the Spring as well.  That course is in our department, Medicinal Chemistry and Natural Products, and is also targeted at graduate students (and will likely be just as small, if not smaller).  Not sure when we’ll get that going, but it probably won’t be until January, knowing how things go around here.

Either way, I think I’m doing a reasonably decent job at preparing for what’s ahead, with regards to that whole “career” thing.  At the very least, getting to add a few “guest lecturer” points on my CV is always a welcome addition.

And maybe I’ll even have a little fun doing it.  🙂

Primer: Neurotransmission

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

As I’ve mentioned…oh…countless times, I became interested in my chosen field primarily because of a class titled “Psychopharmacology,” offered by the Psychology Department at Truman.  As the name suggests, the class primarily focused on how drugs modify an individual’s mental state, whether it’s an illicit drug that changes the way you act (e.g. methamphetamine), or one that’s used to help you cope as you carry out your day (e.g. diazepam [Valium]).

Back in June, I posted about Pharmacology, the study of how a drug acts within an organism.  One thing I discussed, but did not elaborate on, was that many drugs function at receptors, and the modification of these receptors is what gives you the desired effect of said drug.  However, in order to understand how these receptors actually do something to your body, you need to understand the basics of how neurotransmission works.

Basically, neurotransmission is a signal sent between two specialized cells called neurons.  These cells make up a large portion of the brain (though there are other cell types, including astroglia and microglia) and provide all the processing power you need to carry on with whatever task you wish.  Therefore, if you want to modify something about that task, these are important cells to consider and/or target with a drug.  Neurons take advantage of channels in their membranes that allow selective transfer of ions like sodium, potassium, chloride and calcium.  When these ions cross the membrane from outside the neuron to the inside (or vice versa), the electrical charge across the membrane changes.  These channels open and close selectively to allow certain things through, and keep other things out.  For example, sodium channels in neurons typically allow sodium into the cell, while potassium channels tend to allow potassium to leave the cell.
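
If you like numbers, there’s a classic physiology formula, the Nernst equation (not something I covered above, but standard textbook material), that tells you which way each ion “wants” to flow.  Here’s a minimal Python sketch, using rough textbook ion concentrations — illustrative values, not measurements:

    import math

    # Nernst equation: E = (R*T / (z*F)) * ln([outside] / [inside])
    # This is the membrane voltage at which a given ion stops flowing
    # (its "equilibrium potential").
    R = 8.314    # gas constant, J/(mol*K)
    T = 310.0    # body temperature in Kelvin (~37 C)
    F = 96485.0  # Faraday constant, C/mol

    def nernst_mV(z, outside_mM, inside_mM):
        """Equilibrium potential (in millivolts) for an ion of charge z."""
        return 1000 * (R * T / (z * F)) * math.log(outside_mM / inside_mM)

    # Rough textbook concentrations for a mammalian neuron:
    print(f"Sodium:    {nernst_mV(+1, 145, 12):+.0f} mV")   # about +67 mV
    print(f"Potassium: {nernst_mV(+1, 4, 140):+.0f} mV")    # about -95 mV

Since a resting neuron sits around -70 mV, sodium is far from its equilibrium (+67 mV) and rushes in when its channels open, while potassium tends to leak out — exactly the pattern described above.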

Many of the receptors that drugs target are themselves channels, or they affect the ability of channels to open or close.  Therefore, if you can aim your drug at a specific channel, you can keep it open longer, or close it sooner, allowing you to affect whether a neuron is able to continue propagating its signal.

So, the electrical signal caused by transfer of ions across a neuron’s cell membrane (the “action potential”) travels down the neuron, from end to end.  On one end is the “cell body” (or “soma”) and on the other end is the “axon terminal.”  The electrical signal always goes from the cell body to the axon terminal.  The cell body is covered in “dendrites,” outcroppings of the cell that receive a signal from another neuron’s axon terminal.  Therefore, typically, (1) a signal will start at the dendrites; (2) travel through the cell body and down the axon; (3) trigger a set of events in the axon terminal resulting in (4) the release of a neurotransmitter that (5) crosses the synapse until it reaches another dendrite and (1) starts the process over again.
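
If it helps to see steps (1) through (3) as moving parts, here’s a toy “leaky integrate-and-fire” neuron in Python — a deliberately crude cartoon, with every number invented purely for illustration:

    # Inputs at the dendrites nudge the membrane voltage up; the voltage
    # constantly leaks back toward rest; crossing a threshold fires an
    # action potential, after which the neuron resets.
    REST_MV, THRESHOLD_MV, LEAK = -70.0, -55.0, 0.9

    def simulate(dendrite_inputs_mv):
        v = REST_MV
        for t, nudge in enumerate(dendrite_inputs_mv):
            v = REST_MV + LEAK * (v - REST_MV) + nudge  # decay toward rest, add input
            if v >= THRESHOLD_MV:                       # threshold crossed -> spike
                print(f"step {t}: fired an action potential")
                v = REST_MV                             # reset after firing

    simulate([3] * 10)  # steady input from the dendrites eventually triggers a spike

Real neurons are enormously more complicated, of course, but the build-up-to-a-threshold-then-fire logic is the same idea.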

What happens between the axon and the dendrite can best be described by this image, stolen from Wikipedia:

Neurotransmitters are packaged in “vesicles” that are directed to release their contents into the synaptic cleft, where they travel across the cleft to the opposing dendrite, setting off a similar cascade in the next neuron.  There are also “reuptake transporters” on the releasing neuron’s membrane that pull excess neurotransmitter back out of the cleft, so you don’t have that opposing neuron continuing to fire too long.

Examples of neurotransmitters include dopamine, adrenaline (epinephrine), acetylcholine, glutamate and serotonin.  (Nicotine, despite acting on this same system, isn’t a neurotransmitter itself — it’s a drug that mimics acetylcholine at one of acetylcholine’s receptors.)

Now, you probably recognize a few of those neurotransmitters, right?  For example, you probably know that serotonin happens to be very important to your mood.  If you don’t have enough serotonin, you tend to get depressed (at least, that’s the prevailing theory).  So what can you do to help combat this deficiency?  Try taking an SSRI (selective serotonin reuptake inhibitor).  That drug targets the “reuptake transporter,” allowing the serotonin you’re already making to stay in the cleft longer, helping to activate those neurons and keep your mood a bit happier.
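
A toy model makes the SSRI idea concrete.  Below, at each time step the presynaptic neuron may release a pulse of serotonin, and the reuptake transporters then remove a fixed fraction of whatever is in the cleft; an SSRI is modeled simply as a smaller reuptake fraction.  All the numbers are invented for illustration:

    def cleft_levels(release_pulses, reuptake_fraction):
        """Serotonin level in the cleft after each time step."""
        level, history = 0.0, []
        for pulse in release_pulses:
            level += pulse                    # vesicles dump transmitter into the cleft
            level *= (1 - reuptake_fraction)  # transporters pull a fraction back out
            history.append(round(level, 2))
        return history

    pulses = [10, 0, 0, 0, 0, 0]  # one burst of release, then quiet
    print("normal: ", cleft_levels(pulses, reuptake_fraction=0.5))
    print("on SSRI:", cleft_levels(pulses, reuptake_fraction=0.1))

With reuptake mostly blocked, the same single release lingers in the cleft for many more steps, giving the receiving neuron’s receptors that much more time to be activated.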

You’d use an SSRI to help serotonin reach its target neuronal receptors, thereby allowing for increased signal propagation through neurons.  But what if you want to limit propagation of signals, for example, in the case of an epileptic seizure, when neurons are firing uncontrollably?  You can use a drug like carbamazepine.  It targets voltage-gated sodium channels and modifies them in such a way that the electrical signal (the “action potential”) being sent down the axon is limited, or “depressed.”  It prevents the signal from continuing and, therefore, less (or no) neurotransmitter is released into the synapse.  That same drug can be used to help treat the manic symptoms of bipolar disorder, as well.
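
In terms of the toy neuron from a few paragraphs back, you can crudely mimic a channel-blocking drug by shrinking how far each dendrite input can push the voltage (real sodium-channel block is subtler than this, obviously):

    # Same simulate() as before: damped inputs never reach threshold,
    # so no action potential fires and no transmitter gets released.
    simulate([3 * 0.4] * 10)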

So, all of these principles are taken into account (as well as countless others…) when looking for drug targets, and when doctors are prescribing medications.  This is why you can have so many complications when you are prescribed a cocktail of medications, especially as you get older.  If you are taking, say, 10 different medications per day, prescribed by different doctors, it is easy for at least one of those drugs to counteract the effects of another.  There are many factors to consider when prescribing or taking these kinds of medications, as they have effects all over the body.  One simple example is methamphetamine.  This drug targets that reuptake transporter, much like an SSRI does, but it (1) does so for a class of neurotransmitters called catecholamines, and (2) reverses the transporter rather than blocking it.  The catecholamines include dopamine and adrenaline.  So, if you take methamphetamine, you will be increasing the amount of dopamine and adrenaline in your body, not just your brain.  Your heart races because of the adrenaline, and the psychological effects (including meth’s addictive qualities) occur because of the dopamine.
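
Extending the toy cleft model one last time: run the transporter in reverse, so that instead of removing transmitter it pushes more out of the neuron every step.  Again, every number is made up purely for illustration:

    def cleft_levels_reversed(steps, leak_out_per_step, clearance_fraction):
        """Transmitter builds up when the transporter runs backwards."""
        level, history = 0.0, []
        for _ in range(steps):
            level += leak_out_per_step         # reversed transporter expels transmitter
            level *= (1 - clearance_fraction)  # only slow, non-transporter clearance remains
            history.append(round(level, 2))
        return history

    print(cleft_levels_reversed(steps=6, leak_out_per_step=3, clearance_fraction=0.1))

Dopamine and adrenaline pile up continuously, even without the normal firing pattern — a fair cartoon of why methamphetamine hits the whole body so hard.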

In summary, neurotransmission is pretty complicated, but its basics are understandable.  The take-home concepts are:

  • Neurons are responsible for “processing” in your brain, and they use electrical and chemical signals to communicate with each other
  • Many drugs that affect your psychology target the ability of neurotransmitters to “continue the signal” from neuron to neuron
  • Some drugs affect more than one aspect of neurotransmission, and in more than one location