Primer: Cell Death

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

A good portion of my graduate work centered upon how a given cell will die when exposed to a specific toxin.  If you want to develop therapies to prevent the death of that cell, the means by which it dies is important.  It’s also important how a cell doesn’t die, as I’ll explain later on.

We’ll keep this somewhat simple, though.  There are two (very) basic ways that cells will expire: necrosis and apoptosis.  Necrosis involves the destruction of the cell and, frequently, damage to surrounding cells.  Essentially, the cell ends up swelling and exploding, allowing the intracellular materials to leave and get into the surrounding tissue.  Frequently, necrosis is accompanied by extreme inflammation, causing things like white blood cells/macrophages, the cellular defenders against infections and invaders, to get to that area and try to clean it up.  In the process, they end up creating more damage.  Think of it like a “Scorched Earth” policy of eradication of a given problem.  “Take it out and everything around it to make sure we cleared it up.”

Apoptosis, on the other hand, is thought to be much more controlled.  It is a form of “programmed cell death,” meaning that there are mechanisms built into a cell to allow it to fail properly (unlike the United States banking industry…).  Effectively, when specific signals are received, the cell begins the process of dismantling itself, chewing up its own proteins, shutting down its processes, and packaging itself up for a clean removal by nearby macrophages.  Rather than the “Scorched Earth” means of cleanup, it’s more like putting things in trash bags and putting it out on the curb for the garbage truck to come by and pick them up for you.

Apoptosis is an extremely important process for other things, though.  In the early development of an organism, for example, the neural pathways of the brain and spinal cord are set up such that some neurons will make the proper connection and others won’t.  Those that make the proper connection with their target are strengthened, while those that don’t make the connection receive an apoptotic signal to shut themselves down and make way for other neurons.  Cancer, however, is an example of a disorder where the proper apoptotic signals are not received and the cell does not shut itself down as prescribed.  Instead, the cell can’t receive or interpret those signals and continues to reproduce itself.  Eventually, it gets to the point where even the “Scorched Earth” means of eradication by inflammation doesn’t work.

So in general, your body would prefer to go the “apoptosis” route over the “necrosis” route, as the latter tends to produce quite a bit more damage to surrounding cells and tissues that your body would have to repair afterwards.  Once a cell has started down the path of necrosis, it’s difficult to turn back and save it.  Apoptosis, however, can be limited because it is so dependent upon intracellular signals.

This image is only a fraction of what’s actually going on in apoptosis, but it does contain some of the basic signalling mechanisms.  Each of those little acronyms is a protein, coded for by a gene in your DNA.  Some of them are turned on because of a signal sent from outside the cell, while others are turned on when the cell starts doing something it shouldn’t, so it tells itself it needs to shut down and dismantle itself.  The key point, however, is that there are ways to use inhibitors against those proteins to slow down the death of cells, if not stop the death entirely.  Alternatively, in the case of cancer, some of those signals above aren’t functioning properly, and if you can determine which signal isn’t working, you can try to replace it, or “skip over” it and start the signal further down the line.  Think of it as a game of telephone where each of those acronyms above is a person, but “cancer” occurs when one of those people decides not to continue the game of telephone.  We could potentially use drugs to “skip over” that person and keep the game going, or to finish the analogy, to keep apoptosis going.
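
If the telephone analogy helps, here is a tiny toy version of it in code.  This is purely an illustration of the analogy, not of real apoptotic signaling, and the “relay” names are placeholders rather than the actual proteins in the figure:

```python
# Toy "game of telephone" model of a signaling chain like the one described above.
# The relay names are placeholders, NOT the actual proteins in an apoptotic pathway.

CHAIN = ["death signal", "relay A", "relay B", "relay C", "dismantle the cell"]

def message_reaches_the_end(broken=None, drug_restarts_at=None):
    """Pass the 'shut down' message along the chain.

    broken           -- a relay that refuses to pass the message on ("cancer")
    drug_restarts_at -- index of a relay where a (hypothetical) drug re-starts
                        the message, skipping over the broken link
    """
    heard = False
    for i, relay in enumerate(CHAIN):
        if i == 0 or heard or i == drug_restarts_at:
            heard = True
        if relay == broken:
            heard = False  # the message stops here
    return heard

print(message_reaches_the_end())                                      # True: apoptosis proceeds
print(message_reaches_the_end(broken="relay B"))                      # False: the cell never shuts down
print(message_reaches_the_end(broken="relay B", drug_restarts_at=3))  # True: the drug skips the broken relay
```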

A lot of what I just said, however, depends on the ability to personalize medicine.  People are run through a battery of tests when they are diagnosed with cancer, but right now, only a few types of cancer can be targeted in such a way.  Usually, we just go the “Scorched Earth” route, much like your own body does, but instead we use radiation and chemotherapeutics.  Eventually, however, once drugs can be personalized to the individual (e.g. figuring out which person along the telephone line isn’t continuing on with the game), then we should be able to target that cancer specifically and shut it down.  Unfortunately, each person is different and each cancer is different (i.e. it isn’t the same person stopping the game in everyone’s situation: it’s sometimes someone else).  Each cancer has to be checked individually for which signal isn’t working, and that takes lots of time and lots of money.

But science and medicine are getting there.  Slowly, but surely.

The Other Reason(s) For Smartphones

Like most people I know, I’m a fan of technological “toys.”  Smartphones are one of those things, however, that I was a bit slower in getting, mostly due to the costs involved.  The phones themselves tend to be more expensive, and you frequently have to have a data plan attached for at least $15/mo with many carriers.

There are obvious reasons that a smartphone can make your life easier, and most of these reasons involve internet access.  Alternatively, they can also make your life more complicated, especially if you detest the feeling of constant connectedness (which I don’t).  I’ve decided, however, to compile a list of less obvious reasons to consider a smartphone.

  1. Customization – In many cases, people will get a new phone with a contract renewal and are then stuck with that phone for 2 years until the contract is up.  You can always buy a new phone, but you won’t get the subsidized version, thereby making what was a $100 phone more like $500 (the price of a reasonable laptop…).  Over the course of 2 years, I tend to get tired of the interface, especially as I’m seeing new phones coming out to supersede mine.  It makes the phone feel old, even though it works perfectly fine.  Smartphones radically change this dynamic.  Phones that run the Android OS, especially, have “themes” that can be installed to completely change the interface, much like you can change the wallpaper, icons, and color schemes on your computer.  In the case of many Android phones, you can even get OS upgrades that provide many new features.  And you can install applications.  In total, it’s like getting a new phone every time you change the theme or upgrade the OS, much as getting a new version of Windows or Linux is like getting a whole new computer.
  2. WiFi – This could seem like an “obvious” or a “less obvious” one depending on how you look at it.  I would argue that most people would look to the 3G or 4G radios as being the most useful feature of these phones, yet I find that I hardly use that particular technology.  With AT&T, for $15/mo, you get 200 MB of data to download.  Right now, about 3/4 through the billing cycle, I’ve used about 36% of my allotment, and I’ve actually been using it more heavily than I normally do this month (see the quick projection after this list).  This fact will change depending on where you work, but in my case, I typically work around WiFi, and I have WiFi at home.  So for me, the WiFi is a much more useful feature in the phone.  Sure, it’s nice to have 3G available, but living in the Midwest as we do, traveling between Iowa and Missouri, I find that we rarely have 3G access for the whole trip anyway.
  3. Camera – My phone, the HTC Inspire 4G, has an 8 MP camera and an LED flash.  It isn’t the greatest camera in the world, but it’s “good enough” for snapshots.  I don’t use it as a camera replacement; however, I find that I’m much more likely to take a picture and upload it to Facebook for all to see, as it’s thoroughly convenient.  As simple as: take picture; click button; select “Facebook;” and then upload.  In the past, I had to grab the camera, take the picture, remove the SD card to transfer the picture to the computer, open the browser, resize the picture, then upload it.  Much more cumbersome, especially for something as “inconsequential” as a random picture of Meg eating her lunch.  Having a reasonably decent camera on me at all times has made me take more pictures of Meg for the sole purpose of posting them online.
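
As a quick aside on point 2, the back-of-the-envelope projection works out like this (assuming my usage keeps roughly the same pace for the rest of the billing cycle):

```python
# Back-of-the-envelope projection of the data-plan numbers mentioned in point 2.
plan_mb = 200              # AT&T's $15/mo allotment
fraction_of_cycle = 0.75   # roughly 3/4 of the way through the billing cycle
fraction_used = 0.36       # roughly 36% of the allotment used so far

projected_use = plan_mb * fraction_used / fraction_of_cycle
print(f"Used so far: {plan_mb * fraction_used:.0f} MB")
print(f"Projected for the full month: {projected_use:.0f} MB of {plan_mb} MB")
# -> roughly 72 MB so far, on pace for about 96 MB: less than half the cap
```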

Of course, there are countless other reasons to have a smartphone.  I just figure that these are a few that one may not consider while shopping around.  At least, these are the things I find myself most impressed by and using more often than I thought I would (with the exception of the WiFi…I knew I’d use it all the time…).

Primer: Drug-Drug Interactions

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

For a combination of reasons, there are quite a few folks out there today who have a cocktail of drugs pumping through their blood stream.  The elderly, for example, at any given time, can be taking upwards of 10 different medications to manage their back pain, arthritis and blood pressure…and then the depression they feel because they are on so many drugs.  It’s bad enough that they have to be on so many meds, but then when they go to the hospital with another problem, the doctors have to slowly pull them off the drugs they are already on in order to isolate the problem, and then come up with a new cocktail of drugs.  This is especially a problem because so many people have multiple different doctors, some of whom aren’t aware of what medications (i.e. type and dosage) their patients are taking.  And those doctors will sometimes disagree with each other and change the medications back and forth depending on which doctor sees them on a given visit.

But that’s a different discussion.  🙂

All doctors and pharmacists are aware of what are called “Drug-Drug Interactions,” which is basically the idea that one drug you are taking can counteract the effects of another, either by interacting directly with the drug itself, or with the receptors that the other drug is trying to access.  Very commonly, especially in the case of the elderly, it can also occur during metabolism, the process of breaking down a drug so it can be inactivated and excreted from the body.

The common example of a drug-drug interaction involving metabolism (as taught in graduate school and medical school) is that of grapefruit.  Terfenadine, for example, was a very popular antihistamine that is metabolized by a specific cytochrome P450 enzyme, CYP3A4.  It turns out that components of grapefruit juice (and the antibiotic erythromycin, amongst others) tie up and inhibit CYP3A4, in part because they are metabolized by that same enzyme.  In order for terfenadine to be effective, it has to be converted by CYP3A4 into its “active metabolite” (i.e. the drug that actually helps you isn’t terfenadine itself, it’s the metabolite of terfenadine).  If you are drinking lots of grapefruit juice, you don’t get that active metabolite formed and you keep excess terfenadine around in your body.  Unmetabolized terfenadine, unfortunately, causes arrhythmias of the heart (which is what led to its withdrawal from the market).

So in this case, something as simple as grapefruit juice caused a drug to not function properly, leading to unwanted, and unsafe, side-effects.

Another example of drug-drug interactions via metabolism is the combination of acetaminophen (Tylenol) and alcohol.  A portion of any acetaminophen dose is metabolized by cytochrome P450 isoforms CYP2E1 and CYP1A2 to a compound called NAPQI, which is then further converted using glutathione to innocuous by-products.  NAPQI can cause severe liver damage if it hangs around too long.  It turns out that the process of metabolizing alcohol also uses up glutathione.  If you are drinking alcohol and you take acetaminophen, it’s very likely that your liver will produce more NAPQI than it can deal with (i.e. due to decreased glutathione levels caused by the alcohol), thus causing acute liver toxicity.
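
If it helps to see that “shared glutathione pool” idea as numbers, here is a deliberately crude sketch.  The values are invented purely for illustration and aren’t real liver chemistry:

```python
# Toy illustration of the shared-glutathione idea above. All numbers are
# invented for illustration; these are not real liver chemistry values.

def unconjugated_napqi(napqi_made, glutathione_available):
    """NAPQI left over (the damaging part) after glutathione mops up what it can."""
    return max(0.0, napqi_made - glutathione_available)

napqi_from_dose = 10.0     # arbitrary units of NAPQI from an acetaminophen dose
normal_glutathione = 15.0  # enough spare glutathione on a normal day
after_drinking = 6.0       # alcohol metabolism has used a chunk of it up

print(unconjugated_napqi(napqi_from_dose, normal_glutathione))  # 0.0 -> no leftover NAPQI
print(unconjugated_napqi(napqi_from_dose, after_drinking))      # 4.0 -> leftover NAPQI damages the liver
```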

Those are a few examples of how metabolism of one drug can affect another drug.  How about absorption of drugs then, eh?

Tetracycline is an antibiotic that many of us have taken or will take within our lifetimes.  It also happens to bind metal ions very readily.  You shouldn’t take tetracycline along with antacids, for example, as antacids tend to contain aluminum.  Aluminum ions from antacids, or iron from supplements, can form what they call a “chelate” with tetracycline, reducing the ability of your body to take it up into the blood stream.  The same thing happens with calcium ions, so you can’t take tetracycline along with milk, yogurt, or other dairy products.

You can also get what we call “additive” or “synergistic effects” when you take two drugs that do effectively the same thing in a different way.  For example, people take nitroglycerin in order to cause vasodilation, and it does so by producing nitric oxide that then elevates cGMP in vascular smooth muscle cells (ultimately, cGMP is responsible for relaxation of muscle cells, thus allowing your blood vessels to open up further).  Sildenafil (Viagra) elevates cGMP by inhibiting one of its primary metabolizing enzymes.  Moral of the story is: if you are taking Viagra, and you also take some kind of nitrate like nitroglycerin, you can give yourself catastrophic hypotension (i.e. a huge drop in blood pressure).

Warfarin is an anticoagulant with a very small “therapeutic window,” which means that too much or too little of the drug can cause some serious damage to your body.  You have to be very careful when you’re on warfarin, because any variation can cause you to either form a blood clot, causing a stroke, or not clot enough, causing you to bleed out.  Aspirin is a drug a lot of elderly folks take just to help with their heart.  Typically in a low-dose form, aspirin is good to help limit your risk for heart attack and stroke, but if you take any aspirin while you’re also taking warfarin, you can dramatically increase your chances of bleeding, especially gastrointestinal bleeding: taking them together can increase your risk almost four-fold.

All of the preceding examples illustrate how one drug or compound can affect the ability of another drug to work or to be broken down, or in some cases can actually increase the effect of another drug or compound on your body.  The moral of the story is to remain cognizant of what drugs you are on and in what dosage.  Most medical professionals are aware of potential interactions between different drugs, and the examples listed above hopefully illustrate why they need to be aware of what you are taking and why.  If you have elderly parents or grandparents, it is extremely important that they keep a list of medications that they are currently taking with them at all times, especially if they see different doctors for different ailments.  If they were involved in a car accident and needed to go to the emergency room, it would save time and effort to have an up-to-date list of their medications with them, rather than having E.R. docs search to figure out what they are taking.

Of course, if you, yourself, are taking multiple medications now, or know others that are, it is equally important for you, too.  Most drugs will have warning labels on the side of the packaging that help you know what you can take a drug with and what you can’t.

Just bear in mind that, if you really like drinking a vodka and grapefruit juice before bed every night, you may need to tell your doctor before they prescribe anything to you.  🙂

Upgrade Paths, Part 1

Thanks to our relatively hefty tax return, we have a bit of extra cash on hand for me to run an upgrade or two on the computers, upgrades that have been sorely needed for a bit now (though Brooke would probably dispute that…).  For the last few years, I’ve been using laptops as my primary Windows gaming machines, and a dedicated Linux desktop as the server hosting this website.  This has worked out pretty well; however, I’m getting to the point (and the age…) where a gaming-capable laptop is less and less necessary, while a gaming-capable desktop is more attractive.  A desktop can be upgraded, while a laptop really can’t to any reasonable degree.  Therefore, I can run reasonable upgrades more often if I have a gaming desktop rather than a laptop.

My current server uses a dual-core Athlon 64 X2 3800+ with 2 GB of RAM.  The system has worked just fine for the past 5 years since I built it, and has been running almost non-stop since that point.  It’s honestly pretty impressive how well it has held up, considering how long it sits there running without any huge problems.

However, I’m going to use that box and put a different motherboard and processor in it, and will start to use it for gaming.  My laptop (a Core 2 Duo system with a 256 MB GeForce 8600 video card) is well out of warranty and is only barely able to play anything modern, so it’s about time I did something else.  That, however, will be “Part 2” of this particular upgrade.

Since I will use my current desktop computer case, I decided to go with a completely separate system for the new server.  Something smaller and low-wattage was ideal, as the computer doesn’t need to be that powerful to run a web site (this site doesn’t generate 1000s of hits per day or anything…), and since it runs almost non-stop, something that doesn’t take much power is also a big plus.  The Intel Atom D525 processor fits the bill, as this is the processor found in many netbooks, amongst other devices.  It’s a dual-core 1.8 GHz chip, so it will more than do the job, and this particular processor and chipset can utilize DDR3 memory (the current standard).  The box itself, pictured above, is somewhat tiny, only maybe 5″ tall, and will fit snugly wherever I want to stash it.  I’m also going to go ahead and max out its memory with 4 GB of RAM, mostly because there’s a good sale ($40) on it right now.

In total, this upgrade is under $170.  I’m going to use one of my existing hard drives, and I’m not putting a disc drive in this particular system, so I’m saving some money there.  I am grabbing a new keyboard, however, because Brooke spilled soda in my 10-year-old wireless keyboard…so we may finally get rid of it…  But yeah, $170 for a new system ain’t bad, in my opinion, especially for a system that should be more than capable of running a website for the next 5+ years.

I’ll take care of Part 2 in the coming months.  This upgrade had to happen first, however, to move the website off of the existing computer so I can do other things to it (like…you know…turn it off…).

So hopefully the upgrade will be relatively painless.  If, however, this website is down for a few days, you can turn your ear toward Iowa and probably hear some faint grumbling…

Primer: Drug Discovery

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

There are a few ways to approach the general idea of drug discovery, but I’m going to try to tackle it from the historical angle first, and maybe revisit it in a future Primer.  I am part of the Division of Medicinal and Natural Products Chemistry at the University of Iowa, and its two components, Medicinal Chemistry and Natural Products, are both integral to the idea of developing new drugs.  Medicinal Chemistry is just as it sounds: the study of designing and synthesizing new drugs, using principles of chemistry, pharmacology and biology.  The idea of Natural Products, however, is a bit more interesting in that, just as it sounds, it studies chemical compounds “developed” in other organisms that may be useful as drugs.

The oldest records tend to cite the ancient Chinese, the Hindus and the Mayans as cultures that employed various products as medicinal agents.  Emperor Shen Nung, in 2735 BC, compiled what could be considered the first pharmacopeia, including the antimalarial drug ch’ang shang, and also ma huang, from which ephedrine was isolated.  Ipecacuanha root was used in Brazil for treatment of dysentery and diarrhea, as it contained emetine.  South American Indians chewed coca leaves (containing cocaine) and used mushrooms (containing tryptamines) as hallucinogens.  Many different examples of drug use in ancient, and more modern, cultures can be pointed to as early forerunners of today’s drug industry.

However, it was the 19th and 20th centuries that really kick-started the trend, as this is when modern chemical and biological techniques started to take hold.  It was in the 19th century when pharmacognosy, the science that deals with medicinal products of plant, animal, or mineral origin, was replaced by physiological chemistry.  Because of this shift, products like morphine, emetine, quinine, caffeine and colchicine were all isolated from the plants that produced them, allowing for much purer, and more effective, products to be produced.  Advances in organic chemistry at the time really helped with the isolation, so these discoveries wouldn’t have been possible previously.

In today’s world, there are a few ways you can go and discover a new drug:

  1. Random screening of plant compounds
  2. Selection of groups of organisms by Family or Genus (i.e. if you know one plant that makes a compound, look for more compounds in a related plant)
  3. Chemotaxonomic approach investigating secondary metabolites (i.e. looking at the classes of compounds, like alkaloids, that plants produce beyond their basic metabolism, and searching related plants for similar compounds)
  4. Collection of species selected by databases
  5. Selection by an ethnomedical approach

I think the latter two are the most interesting, especially with a historic perspective.  With the ethnomedical approach, we’re talking about going into cultures (a la the movie “Medicine Man”) and learning about the plants that they use to cure certain ailments, then getting samples of those plants and figuring out what makes them effective.  It has been estimated that of 122 drugs of this type used worldwide, derived from 94 different species, 72% can be traced back to ethnic groups that used them for generations.

The prospect of discovering new drugs of this type is actually somewhat worrisome, though, as these cultures die out or become integrated into what we’d consider “modern society.”  These old “medicine men” and “shamans” die before imparting their knowledge to a new generation, and these kinds of treatments are lost.

The collection of species and formation of databases is interesting, and has only become practical in recent history due to the advent of computers that can actually store and access all the information.  With this process, we’re talking about going into a rain forest, for example, and collecting every plant and insect species you can find, then running various genetic and proteomic screens on the cells of each plant and insect to see whether they produce anything interesting or respond to anything.  This process can involve thousands of species across a single square mile of rain forest, necessitating a great deal of storage space for the samples themselves, as well as computing power to allow other researchers to search for information on any given species.

An example of a “screen” that one could carry out would be to grow bacteria around your plant or insect samples.  If you’ve ever heard the story of penicillin, you’ll know that Alexander Fleming (1928) noticed that his culture of Staphylococcus bacteria stopped growing around some mold that had found its way into the culture.  From that mold (a Penicillium species), penicillin was developed as our first antibiotic.  The same kind of principle can be applied here: mix your samples together and “see what happens.”  If anything interesting happens, you then continue investigating that sample until you isolate the compound that is doing that interesting thing.

The isolation of that “interesting compound” can be very tricky, however.  In many cases, a particular anticancer agent or antibacterial agent may be housed inside the cells of our plant species.  Getting that compound out may be difficult, as it could be associated with the plant so tightly that you have to employ a variety of separation techniques.  And even after you apply those techniques, what you are left with may be nonfunctional, as the compound may require the action of that plant itself to work properly (i.e. the compound you want may still need other components to work).  Even after you isolate the compound you want, in order to make it a viable drug, you have to be able to synthesize it, or something like it, chemically in a lab setting.  Preferably, on a massive scale so you can sell it relatively cheaply as a drug to the masses.  These processes can be daunting and costly.

So basically, it can be fascinating to discover new drugs, especially ones that were actually “discovered” thousands of years ago by cultures that have long since died out.  However, you may find that “discovering” the drug may be the easy part – mass producing the drug could be the most challenging aspect of the ordeal.

Choices

Last year, my phone was due for an upgrade, but since Brooke was doing quite a bit more traveling around Cedar Rapids at the time, we opted to give her my upgrade so she could get a smartphone, the HTC Aria (AT&T).  Thus far, she’s been quite happy with this little Android device, a phone that browses the internet, includes a GPS, and accesses WiFi in a variety of venues, obviating the need for a ridiculously expensive data plan (the stock 200 MB/mo plan is $15 extra per month).  Also, this phone was a shade smaller than the iPhone and was much more comfortable for her to deal with.

March 17th, however, Brooke’s phone number will be eligible for an upgrade, meaning that it’s my turn to get a new phone.  Thus, as I’m known for doing (like my father, before me…), I’ve been researching the various possibilities that AT&T has to offer with regards to phones.  For a few years now, the plan has been to go with an iPhone, as the iOS platform has the programs I want and the games I want to play.  For these past few years, Android just hasn’t been able to compete on the software front with the lead that Apple had built with their iPhone system.

This has begun to change.  Quickly.

Now, more and more programs and games are going Android at the same time they go iOS, and many of the original programs that ran on iOS have been or are being ported over to Android.  Thus, recently, I began to reconsider my plan to go with iPhone.

The other nail in the iPhone’s proverbial coffin for me is the fact that the iPhone 3GS is $50 (cool!) and the iPhone 4 is $200 (less cool?).  The iPhone 5 isn’t out, and technically hasn’t been announced, but surely won’t be available until this summer at the soonest.  So, do I get the iPhone 4 this March for $200?  Or do I wait until the iPhone 5 comes out and get it for $200?  Or once the iPhone 5 comes out, get the reduced-price iPhone 4 for $100?  Decisions, decisions, decisions!

The decision, I think, has been made for me, and it’s called the HTC Inspire 4G.

The HTC Inspire 4G just came out for AT&T early this month for $99 with a contract renewal.  It’s essentially a rebranding of Sprint’s Evo 4G, but doesn’t have a front-facing camera or a stand on the back of the phone (for holding it up while you watch videos).  It’s bigger than Brooke’s Aria, although it’s the same brand and is set up very similarly with regard to the user interface and overall construction.  As the name implies, it’s also the first Android-based AT&T phone to get onto their quasi-“4G” network, technically HSPA+, at least wherever that network is available.  It will be capable of taking on the true “4G” network when it launches later this year, so this phone is pretty well future-proofed for $99.  Not bad.

Brooke and I went by the AT&T Store today in Cedar Rapids to check one out.  I was quite pleased with it, talked with the sales dude about my options with regards to this phone as well as other, comparable phones, and I decided to go ahead and get it.  I was less than 30 days from the upgrade date, so they waived it and let me upgrade early.

I’ve been playing with the phone for most of the day, as I typically do with new toys.  I’ve been pretty happy with it thus far, but will learn more about what the Android platform is capable of in the coming days.  I still need to grab some kind of protective case for it, but those aren’t too hard to find.  Otherwise, I think I’ve got the user interface set up the way I want it, but am now trying different apps to see which ones I like for doing the things I want.

Of course, one of the benefits of going Android is the fact that it syncs up quite well with your Google account, so it pulls down my mail, calendar, RSS feeds, etc. from the interwebs with the click of a button.  Very efficient and very helpful for my purposes.  One of the other neat features about this particular phone is the HTC Sense connectivity with HTC’s website, allowing you not only to switch your phone from “silent” to “loud” from the website (in the event that you lose your phone), but also to remotely wipe its memory (in the event your phone is stolen).  Neat!

Needless to say, I’m having a good time.  🙂

Primer: Scientific Funding

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

One would like to think that major universities spend their own money on research for their various faculty members, but unfortunately for me, that typically isn’t the case.  Sure, there is a reasonable amount of money going to fund the research carried out by faculty members in biology, physics, and chemistry departments, but the reality is that, in order for that research to occur (and for almost all of the important discoveries under the umbrella we call “Science” to happen), money must come from sources other than the university.  In many cases, your tenure and rank at your given institution are determined by how much outside funding you bring in and where it comes from.

The majority of scientific funding in the United States comes from the Federal Government, mostly in the form of the National Institutes of Health (NIH) and, to a lesser degree, the National Science Foundation (NSF) and Department of Energy (DoE).  Scientific American did a great job recently summing up how much money goes into which pot at the Federal level with an easy-to-read graphic that I suggest you glance at.  Basically, the NIH gets $28.5 billion to divide amongst its various projects, including grants that professors and other individuals apply for.  The NSF gets $4.2 billion, and the DoE gets about $3.5 billion to devote to research.  For comparison’s sake, the Department of Defense gets $56.2 billion (excluding special funding in war-time).
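
To put those numbers side by side (these are just the figures cited above, nothing more):

```python
# Quick comparison of the research-budget figures cited above (billions of USD).
budgets = {"NIH": 28.5, "NSF": 4.2, "DoE (research)": 3.5, "DoD": 56.2}

science_total = budgets["NIH"] + budgets["NSF"] + budgets["DoE (research)"]
for agency, amount in budgets.items():
    print(f"{agency:>15}: ${amount:5.1f}B")
print(f"NIH share of the NIH+NSF+DoE pot: {budgets['NIH'] / science_total:.0%}")
# -> NIH alone is roughly 79% of that three-agency pot, but still only about
#    half of what the Department of Defense gets.
```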

Obviously, NIH is getting a substantial piece of that pie.  For the most part, if you are doing biomedical research like I am, then the NIH is the first place you apply to.  They will generally fund anything that you can tie to a disease or disorder.  Alternatively, NSF won’t touch any grant that even implies it could help with disease research, instead focusing on really basic research.  Chemists and Physicists can find applications in the NIH, but usually NSF and DoE (or others) are where they have to look for funding.  And that pot is much smaller than the NIH pot.

The process of applying in each agency varies, but for the most part, you go about it the following way:

  1. Find a grant application that applies to your research
  2. Write the application according to their explicit instructions
  3. Submit the grant by a given due date (usually a few times per year)
  4. The grant is assigned to a division of the agency and then further assigned to a committee
  5. The committee is made up of people who should know what they’re doing; they rank each grant they receive based on its merits, need, and contribution to science
  6. The committee is given a number of grants that they can fund (usually between 5-20% of total grants submitted)
  7. Funding is decided and you are notified of the decision

There are usually three decisions that can be made.  Either a). the funding agency can grant you the money and accept your project as-is; b). the agency can give your grant a rank or score and suggest you make some changes and resubmit it; or c). they can “triage” your grant, basically saying they didn’t even score it, and that it needs significant work to make the cut.  The committee in question will usually give you some kind of pointers as to why your grant was or wasn’t funded, but that experience will vary across agencies and committees.

The NIH has a few different grant series that you can apply for.  Some, like the one I applied for in early December, are considered “training grants.”  So in this case, the grant I applied for was a post-doctoral training grant (designated “F32”) that would pay my salary for 2-3 years, based on the project I outlined to them.  No equipment or anything would be paid for – just my subsistence.  Alternatively, the “Big Daddy” grant to get is designated “R01,” which is a big league research grant that awards up to $5 million to a researcher and their lab, paying for salaries, equipment, and even some travel money to conferences.  At many big academic institutions, you need to get an R01 before you can achieve tenure.  At some of them, you need two.  The going funding rate for these grants has been in the 8-10% range, which is pretty low.  It’s tough to get an R01 and you can spend a lot of your time writing these grants and trying to get them, rather than actually doing research.

There are alternatives to federal money, of course.  You could call these Private, or “Foundation Grants.”  These entities are frequently not-for-profit groups that are set up to fund research according to their specifications.  The Michael J. Fox Foundation for Parkinson’s Research is one you may have heard of.  The American Heart Association is another.  The grants these foundations fund are typically quite a bit smaller than those funded by the government, rarely reaching into the millions of dollars.  They are also quite competitive, and some could argue more competitive than federal funding.  Generally, you end up spreading yourself thinner across multiple foundation grants if that’s how you have to fund your lab, or you get a single federal grant (or two…).  It all depends on how large your operation is, how many people are under you, and how many projects you have running at a given time.

I’ll leave you with one last point about the funding of science (insert soap box here): the majority of scientific innovations and true breakthroughs come out of research funded by the agencies listed above: NIH, NSF and DoE.  Private Industry, such as Pfizer or Merck, carries out its own research and development programs, but relies heavily on basic research carried out in academic settings.  They do this partially because these companies cannot patent what is published in a journal article by someone else, so they have to take other research, apply it to their own needs, and then create a patent that they can make money off of.  When federal funding for science drops, or doesn’t even increase with inflation, professors bring in less money and cannot afford to pay their workers.  That means that less basic research is done.  That means that Private Industry has to devote more money to R&D in order to make new discoveries.  That increases the amount of money they need to put into developing a drug (more on that in a future Primer…).  Finally, that means the drugs and treatments that then go to you cost more money, adding to the sky-rocketing health care costs we already have, mostly because the research that Private Industry did is now covered under a patent for 10 years and no one else can make money on it and compete.

Funding of science at the federal level is incredibly important.  It’s hard enough as it is to get a grant, and it is vitally important that the money NIH, NSF, DoE, etc. get does not decrease, but instead increases.  That’s where scientific innovation comes from in the United States.  It’s why people from all over the world come here to get a Ph.D. and do research.  Because the United States values innovation and discovery.

As well they should.

T.M.I.

I have been slowly catching up on podcasts from late last year now that I’m back at work.  I was listening to one yesterday from NPR’s On Point discussing the Wikileaks scandal and, more broadly, the world that we now inhabit with regards to leaks, the internet, and the overall availability of information.

Toward the end of the segment, the host, Tom Ashbrook, was talking to the former Director of National Intelligence, John Negroponte.  He asked Negroponte how we, the United States, would or could deal with a leak like this.  Negroponte answered that they would do their best to prevent it from happening in the first place, placing greater restrictions on the individuals that can access certain information, and also re-classifying information that should be classified versus that which really doesn’t need to be.  Ashbrook kept pressing him on the matter, asking: “What would you do in the event of a leak?  How would you stop it?”  Negroponte kept going back to “stop it at the source.”  It got really annoying to hear the same question over and over, when I kept repeating the answer in my head as often as Ashbrook could ask it.

The correct answer?

You do nothing.

There is nothing you can do.  Once the Internet has your information, you’re done.  It’s out there and you can’t stop it.  You can shut down a server or two, but the information propagates to such a degree that you can never fully eradicate any of it.

As happens frequently, this exchange got me thinking about generational differences and their views on the Internet as a whole, specifically to what degree each generation seems to embrace the sharing of information.  [Note: I have talked about this before…]  For those of us that grew up in parallel with the Internet (i.e. it was growing as we were growing), I think the transition was easy.  We learned to live together, gradually sharing some bits of information and withholding others.  We were using the Internet before Google even existed, when all you could do was use Yahoo! to find a website that someone had manually filed within their directory.  There was no Facebook.  There was no YouTube.  Primarily, we were takers of information rather than providers, at least until we became more comfortable contributing to this new ecosystem.

The generation(s) older than me have taken to the Internet at a slower pace (at least in terms of creating new information…), largely because they’re more cautious.  Quite a few folks from those generations are now using e-mail and Facebook, and consequently are now starting to rely on it to a greater degree than ever before.  You can still see the delay in overall adoption in things like smart phones though, where these people are just now starting to get into the mode where they think complete and total connectivity is a necessity.  This is likely because their children and grandchildren are also more accessible, so if they want to contact them, this is how they have to do it.

It’s the younger generation(s) that I’m more curious about.  These people are growing up in a world where the Internet “just exists,” much like air and gravity.  It’s a reality.  It’s something you live with and use.  I guess the difference goes back to information sharing: the older generation never really shared things and stayed more private; my generation gradually let certain things slip and get onto the Internet; and the younger generation never really learned the restraint that should be applied to some things but not others.  However, I imagine that these kids are much more attuned to “what should go on the Internet” and “what should not go on the Internet” than I give them credit for.  They’ve seen things happen to their friends when something gets posted that shouldn’t have been, likely causing them to think twice about their choices.

Personally, I’ve always held the view that whatever I post on the Internet is viewable by The World At Large.  Anything I post on Facebook (and there are quite a few politics-based links I post up there…my views are pretty clear…) can be seen by practically anyone.  Anything on this blog can be seen by absolutely anyone.  Any future employer I have will likely run a quick Google search on my name, and this blog will be the first thing that comes up.  They can go back almost 6 years and read all about me, my family and what I’ve been up to.  Am I proud of all of it?  Not necessarily, but I also don’t hide from it.  That information is representative of who I was and who I am today.  If you want a snapshot of Andy Linsenbardt and all he’s about, this is where to find it.  Freely available and open for all to see.

This is also how I view information in general.  Sure, we have an inclination to hide things, but more often than not, we’re trying to hide things that we’re embarrassed about.  I plan on teaching Meg and her siblings someday that the Internet is a very useful tool, but anything you post on it can be viewed at any time.  If you don’t want anyone to see a certain picture of you drinking while you’re underage, don’t put it online.  Someone will find it.  Even if you delete it, it’s saved on a server somewhere that someone can get.  Anything that could potentially embarrass you should stay far away from the Internet.  Really, though, you just shouldn’t actually do things that could potentially embarrass you someday, but that’s another matter…

No matter what generation you come from, “honesty is the best policy” still applies to you.  Everyone is entitled to secrets, but there are some things that may as well be out in the open, freely accessible, so that others know more about who you are and how to deal with you.  It ends up saving time in the “getting to know you” stage.  You come up with better strategies for dealing with others when you know more about them.  Sure, you learn how to take advantage of them as well, but hopefully this kind of openness spreads the naivety pretty thin.

Which brings us back to the Wikileaks deal from last year.  A lot of people were concerned that this information could hurt America’s standing in the world, and hurt our relationships with other nations.  Information that the United States was hiding was perceived as something to be embarrassed about, even if, at first glance, that information was innocuous.  In the end, the complaint that this leak somehow disrupted the fabric of space-time and all is lost is moot: if you really didn’t want that information out, then you should have classified it differently.

However, the larger point is this: perhaps most of that information should have been out in the open anyway.  Much as reading this blog gives the reader some extra insight into me, perhaps a lot of that information provides extra insight into the world we inhabit and the cultures we interact with.

And I don’t see a problem with that.

Primer: Drug Metabolism

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

I chose to work on this subject for December because I may end up teaching a lecture or two on metabolism in early February to pharmacy students.  Obviously I’ll go more in-depth with them, but that isn’t the purpose of these Primers: they are intended as introductions.

Merriam-Webster defines “metabolism” as such:

Metabolism –noun

a.  …the chemical changes in living cells by which energy is provided for vital processes and activities and new material is assimilated

b. the sum of the processes by which a particular substance is handled in the living body

This definition is all well and good, but we’re talking about a specific form of “metabolism” here, one that is really about the chemical breakdown of a compound, not necessarily for the purpose of generating energy.

Wikipedia provides us with a separate definition for drug metabolism:

Drug metabolism is the biochemical modification of pharmaceutical substances by living organisms, usually through specialized enzymatic systems.

So when we’re talking about an individual, such as an athlete, who has a “strong metabolism,” we’re talking about processes related to, but separate from, the ones typically involved in modification and removal of drugs from your system.

In general, drug metabolism consists of two separate processes known as Phases.  In Phase I metabolism, a given compound is chemically modified and typically inactivated (but not always, as we’ll see shortly).  This usually involves a specialized protein called an enzyme that alters a specific portion of the compound, frequently rendering it pharmacologically inactive.  Phase II metabolism typically involves the addition of another molecule onto the drug in question, something we call a “conjugation reaction.”  This process also serves to increase the polarity of a given drug.  Usually, we think of Phase I reactions as preceding Phase II reactions, but not always.

When I say “polar,” I mean it in a sense similar to a planet, in that a planet has “poles” (e.g. north and south).  For the sake of simplification, you can also think of a magnet or a battery instead, with a “positive” pole and a “negative” pole.  In this fashion, chemicals can also have a more positive end and a more negative end, including chemicals like water:

In this case, the oxygen atom in water (i.e. H2O) is negative while the two hydrogen atoms are positive.  Therefore, water is polar: it has an end that is more positive and an end that is more negative.  Polar compounds are also considered “hydrophilic” (i.e. “water-loving”), mostly because these polar chemicals tend to dissolve readily in water.

There are examples of “hydrophobic” (i.e. water-fearing) chemicals as well, also known as non-polar.  You know how oil and water don’t mix?  That’s because oils like fats or lipids are hydrophobic and non-polar, made up of molecules that look kinda like these.

These are all examples of hydrophobic (non-polar) compounds, those that do not mix well with hydrophilic (polar) molecules like water.

The key to drug metabolism is to realize that the membranes of your cells, and thus your organs, contain a lot of lipids like these, so if you have a drug that is particularly “lipophilic” (and thus hydrophobic), then the drug is more likely to hang around in your body.  That is to say, a drug that is non-polar can hang around longer, affecting you for longer than you may otherwise want.  If you use a more polar drug (i.e. hydrophilic), it’s more likely to get passed out of your body much faster.  Much of your body’s ability to expel chemicals and metabolites depends on the ability of your kidneys and liver to get those chemicals and metabolites into a form that works well with water, as water is what you typically get rid of (i.e. urine).

When your body recognizes a foreign compound, such as a drug, it wants to make that drug more polar so it can excrete it.  Thus, your liver contains a number of enzymes that do their best to make those foreign compounds more polar so you can get rid of it.

This process, obviously, impacts the ability of a drug to take action, which is why it’s important.  There’s a reason why drugs are introduced to your body orally (i.e. through the stomach/intestines), or intramuscularly, or intravenously.  If you take a drug orally, it is subjected to what is termed First-Pass Metabolism.  Typically, when you eat something, the nutrients from whatever you ate are taken up through the portal system and hit your liver before they hit your heart, and only then go on to the rest of your body.  Therefore, if you take Tylenol for a headache in pill form, some of it will be broken down in the liver before it ever reaches the heart, and only what’s left gets pumped to your brain to help with your headache.
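
A rough way to picture first-pass metabolism numerically (the numbers below are made up for illustration and aren’t real values for Tylenol or any other drug):

```python
# A minimal sketch of first-pass metabolism. The numbers are made up for
# illustration; they are NOT real pharmacokinetic values for Tylenol.

dose_mg = 500            # hypothetical oral dose
fraction_absorbed = 0.9  # fraction that makes it from the gut into the portal blood
liver_extraction = 0.3   # fraction the liver breaks down on that first pass

reaching_circulation = dose_mg * fraction_absorbed * (1 - liver_extraction)
print(f"Oral dose reaching general circulation: {reaching_circulation:.0f} mg "
      f"({reaching_circulation / dose_mg:.0%} of the pill)")

# An intravenous dose skips both the gut and that first trip through the liver,
# so essentially all of it reaches circulation, which is part of why IV dosing
# of a drug with a toxic metabolite has to be handled so carefully.
```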

Alternatively, you could take Tylenol intravenously, which bypasses the liver and thus gives you a full dose.  However, Tylenol is toxic in high doses, so you would never want to inject much of it (or any of it…there are better choices if that’s what you’re considering….) for fear that it could kill you.

The final concept to consider, aside from drug modification, polarity and first-pass metabolism, is how we could use this system to our advantage.  Take a benzodiazepine like Valium (diazepam), for example.  Valium, on its own, is very useful as a depressant, used to treat things from anxiety to seizures; however, the act of drug metabolism produces metabolites that are also active (called, not surprisingly, active metabolites).  In the case of Valium, it is broken down in the liver to nordiazepam and temazepam, and ultimately to oxazepam.  Each one of these metabolites is active to some extent, which means that a single dose of Valium will last for quite a while as it’s broken down into other compounds that still affect you.
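
To see why active metabolites stretch out a drug’s effect, here is a toy calculation using simple exponential decay.  The rate constants are invented rather than real diazepam values, and it assumes all of the parent drug is converted into a single metabolite:

```python
# Toy simulation of a drug with an active metabolite, using simple first-order
# (exponential) elimination. The rate constants are invented for illustration
# and are not real values for diazepam or its metabolites.

import math

k_parent = 0.02      # per hour: how fast the parent drug is converted/eliminated
k_metabolite = 0.01  # per hour: the metabolite hangs around even longer

def amounts(t_hours, dose=100.0):
    """Amount of parent drug and metabolite remaining t hours after a dose."""
    parent = dose * math.exp(-k_parent * t_hours)
    # classic two-step expression: the metabolite forms from the parent,
    # then is eliminated itself
    metabolite = dose * k_parent / (k_metabolite - k_parent) * (
        math.exp(-k_parent * t_hours) - math.exp(-k_metabolite * t_hours))
    return parent, metabolite

for t in (0, 24, 48, 96):
    parent, metabolite = amounts(t)
    print(f"{t:3d} h: parent {parent:5.1f}, metabolite {metabolite:5.1f}, "
          f"total active {parent + metabolite:5.1f}")
# The "total active" column falls off much more slowly than the parent drug alone.
```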

Sometimes, you can administer a non-active drug that then becomes active once it’s modified in your liver.  We call this a prodrug.  Codeine, for example, is modified by Phase I metabolism to its active form, morphine.  Morphine itself is typically administered intravenously, as it’s rapidly metabolized in the liver.  Codeine lets you take advantage of your liver to deliver morphine in pill form, which is otherwise hard to do efficiently (as much of the morphine would be broken down before it even hit your heart).

In short, drug metabolism is an extremely important process to consider when designing a drug.  You need to take ease of use and route of administration into account, you need to consider whether a drug has active metabolites or not, and you need to be aware of how hydrophilic/hydrophobic a drug is if you want it to remain in your body for any reasonable amount of time.

Primer: Structure of the Brain

These posts, tagged “Primer,” are posted for two reasons: 1). to help me get better at teaching non-scientists about science-related topics; and 2). to help non-scientists learn more about things they otherwise would not. So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

I can’t say I’ve been excited about writing this one, as brain anatomy is, quite possibly, the most boring thing I can think of to write about.  I did a rotation at SLU in a lab that focuses on anatomy and how individual brain structures interact with one another, and that 6 week period was more than enough for me.  As that professor told me, it’s very important work that someone needs to do, even if it may not seem all that interesting.  This kind of work is how researchers have figured out which brain component “talks” to which other one(s), and how intertwined all these connections really are throughout the brain.

For the sake of this posting, I’ll simply point out that brain mapping has been carried out in a variety of ways.  Quite a bit of it was done over the decades by studying people who hit their heads.  If they lost their memory, or their sense of smell, clinicians could localize the injury to a specific area of the head, then look at the brain post-mortem and see what happened.  Ultimately, they would find a lesion of dead tissue in the region that led to the deficiency.  Similarly, the study of stroke victims also provided clues to the function of certain brain locations, as a stroke occurs when blood flow is cut off to an area of the brain, typically leading to damage.  Alternatively, modern science uses a series of stereotactic injections of traceable materials in mice, rats and primates that can be visualized in brain slices, showing that a series of neurons in one area are connected with neurons in a separate region of the brain.

It is through this work that certain pathways were elucidated, including the reward pathway (very important for drug addiction, gambling addiction, etc.); the movement pathway (mostly for Parkinson’s disease, but important for voluntary movement in general); the sensory systems (how the visual cortex signals, the auditory cortex, etc.); the amygdala (figuring out what this structure did and where it went led to quite a few lobotomies back in the day); and memory (signals transferred between the hippocampus, the reward system, and the cortex…very complicated network…).  Brain mapping like this helped determine where everything connects together, and which areas are important.

While the human brain is a difficult nut to crack, it can be divided up into different portions.  For the sake of this little blurb, we’ll focus on the three primary divisions of the brain: the prosencephalon (forebrain), the mesencephalon (midbrain) and the rhombencephalon (hindbrain).

The prosencephalon, or forebrain, is further divided into the telencephalon and the diencephalon.  The telencephalon consists, primarily, of the cerebrum, which includes the cerebral cortex (voluntary action and sensory systems), the limbic system (emotion) and the basal ganglia (movement).  As you can see from that list, the telencephalon largely constitutes what “you” are: your thoughts, your feelings, and your interaction with the world around you.  It’s where a lot of your processing happens.  The telencephalon in humans is quite a bit more developed than in other species, and it is really what separates the human brain from those of less developed species (i.e. the human telencephalon is what really separates us from a chimpanzee).  The diencephalon, on the other hand, consists of the thalamus, hypothalamus and a few other structures.  The thalamus and hypothalamus are very important for various regulatory functions, including interpretation of sensory inputs, regulation of sleep, and release of hormones to control eating, drinking, and body temperature.

The mesencephalon is composed of the tectum and the cerebral peduncle.  The tectum is important for auditory and visual reflexes and tends to be more important in non-mammalian vertebrates, as they don’t have the developed cerebral cortex that humans do (more on that later).  The cerebral peduncle, on the other hand, is a mixed bag of “everything in the midbrain except the tectum.”  It includes the substantia nigra, which ties into the movement system and reward system.  I think it’s fair to say that, aside from these things, the function of the midbrain, overall, has yet to be fully determined.

The rhombencephalon is quite important, even though it’s probably the oldest part of the brain, from an evolutionary standpoint.  It includes the myelencephalon (medulla oblongata) and the metencephalon (pons and cerebellum).  The medulla oblongata is important for autonomic functions like breathing and heart function.  The pons acts primarily as a relay with functions that tie into breathing, heart rate/blood pressure, vomiting, eye movement, taste, bladder control and more.  Finally, the cerebellum is important for a feeling of “equilibrium,” allowing for coordination of movement and action, timing and precision.
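
Since that’s a lot of “-cephalons” to keep straight, here is the same hierarchy from the last three paragraphs collapsed into one (heavily simplified) outline:

```python
# A compact summary of the hierarchy described above, using only the divisions
# and roles mentioned in this post (heavily simplified, like the post itself).

BRAIN = {
    "prosencephalon (forebrain)": {
        "telencephalon": {
            "cerebral cortex": "voluntary action, sensory systems",
            "limbic system": "emotion",
            "basal ganglia": "movement",
        },
        "diencephalon": {
            "thalamus / hypothalamus": "sensory relay, sleep, hormones, eating, drinking, body temperature",
        },
    },
    "mesencephalon (midbrain)": {
        "tectum": "auditory and visual reflexes",
        "cerebral peduncle": "includes the substantia nigra (movement, reward)",
    },
    "rhombencephalon (hindbrain)": {
        "myelencephalon": {"medulla oblongata": "breathing, heart function"},
        "metencephalon": {
            "pons": "relay: breathing, heart rate/blood pressure, vomiting, eye movement, taste, bladder control",
            "cerebellum": "equilibrium, coordination, timing, precision",
        },
    },
}

def print_tree(node, indent=0):
    """Walk the nested dictionary and print it as an indented outline."""
    for name, value in node.items():
        if isinstance(value, dict):
            print("  " * indent + name)
            print_tree(value, indent + 1)
        else:
            print("  " * indent + f"{name}: {value}")

print_tree(BRAIN)
```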

As you may have noticed, if you go from back to front, you get increasing complexity in brain function.  For example, the hindbrain is important for very basic things like breathing, heart rate, and coordinated movement.  These are functions that are important in nearly all organisms, all the way down to the smallest worm and insect.  Further up, the mesencephalon starts to work in further control of reward and initiation of voluntary movement, giving the organism voluntary control rather than solely reflexive control.  Then, the diencephalon starts acting like a primitive brain, working in regulatory functions and more complicated reflex actions to help maintain the more complex organism that has been assembled.  And finally, the telencephalon yields the ultimate control over the organism, with things like memory, emotion, and greater interpretation of sensory inputs.  As the image above shows, the hindbrain (to the right-hand side) remains a large portion of the brain in the rat and the cat, but the human forebrain (the top/left-most portion) gets much larger, relative to the hindbrain.  With that size comes greater development of brain structure and function.

So yeah, the brain is kinda complicated.  Actually, it’s really complicated and, for the most part, I do my best to ignore all of the complex wiring networks that occur within.  However, it is important work that needs to be done in order for surgeons to do what they do, and for neuropharmacologists to develop drugs that target some brain areas and not others.  For the most part, I’ll leave this research to more interested people…