Category Archives: geek stuff

Empiricism vs Rationalism

As part of the grant I’m on at work, I am expected to attend “continuing ethics training” each year.  Last Wednesday was the first of two sessions, each a little over an hour long, and I ended up presenting a case study to the other folks in the room about how science is conducted and how it is perceived by the general public.  This past Wednesday, however, we had a guest speaker in the form of Stephen Lefrak, a pulmonary physician who also has research interests in medical ethics.

He covered a range of subjects, but he specifically highlighted a series of studies he was involved with over 10 years ago: studies published in the New England Journal of Medicine, among other high-profile journals, funded by the NIH, and carried out by the National Emphysema Treatment Trial (NETT) Research Group.  These studies involved a surgical procedure for patients with emphysema, in which portions of the lung with damaged tissue would be removed, and the rest of the lung (presumably healthy tissue) would be restructured to form a better-functioning respiratory organ.  Lefrak and his colleague here at Wash U were involved early on with the trial, but left after they developed serious ethical concerns, one of which centered on the idea of a “randomized controlled trial” (RCT).

For the sake of simplicity, an RCT is essentially a study in which each individual in a given group is assigned at random to one of two (or more) potential treatments.  In this case, the treatment was the surgical removal of (presumably damaged) lung tissue in order to refashion a healthier lung, and the group was emphysema patients.  However, and importantly, it was known at the time that you can’t just do this to someone whose lung damage is spread throughout the lung: it only works if there is healthy tissue still in there to salvage.
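Just to make the mechanics concrete, here is a minimal sketch of what “assigned at random” means, in Python; the patient IDs and arm names are made up for illustration, not taken from the NETT protocol:

```python
import random

# Made-up patient IDs; in the NETT, the group was emphysema patients.
patients = [f"patient_{i:02d}" for i in range(1, 11)]

rng = random.Random(42)  # seeded so the example is reproducible
# Each patient gets an arm at random, with no attention paid to their traits.
assignments = {p: rng.choice(["surgery", "medical_therapy"]) for p in patients}

for patient, arm in assignments.items():
    print(patient, "->", arm)
```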

Lefrak knew it wouldn’t work if the trials were carried out at random (i.e. paying no attention to the quality of the patients’ lungs, whether they had healthy lung tissue remaining, or whether they had a “homogeneous” mix of damaged and undamaged tissue).  However, when this concern was raised in the pages of NEJM, he was essentially told that he couldn’t “know” it because an RCT had not been done to prove it.

As a result, almost 50% of the patients on whom it was tried ended up dying, for the very reason Lefrak and colleagues had warned about.

Which brings us to the title of this post: empiricism vs rationalism.  “Empiricism” is what drives the belief that an RCT is essential to making the claim that this kind of lung surgery is “dangerous” to a subset of individuals.  “Rationalism” is behind the idea that we actually know things about how the body works and can make an informed inference as to what the outcome would be without having to do the RCT to “prove” it.

The example Lefrak gave is that an RCT to prove you need a parachute to survive a jump from a plane would be silly.  We already know the answer.

As Lefrak talked about his experience, it got me thinking about where our knowledge comes from and how we build upon it, and whether I concern myself, personally, with “evidence” more than I should instead of thinking rationally about a subject in order to come to a conclusion.  I’d consider myself to be a “rational” person, but perhaps not.  Then again, as he described what the surgery was seeking to do, my physiology training assured me that I would have been on his side from the beginning, rather than advocating the continuation of the NETT work.

It’s just something we, as scientists, ought to consider more often than we typically do, I guess.

To SSD, or not to SSD?

Last year, my laptop died.  Rather than replace it, I opted for upgrading my desktop PC to make it gaming-capable, among other things, as it tends to be far cheaper and is much, much easier to upgrade when components go on sale.  At the time, I did the bulk of the upgrades, but I didn’t get new hard drives, as they were still functional and I didn’t think they were as important to spend extra cash on when I could put that money into a new processor or RAM.  So, since that time, I’ve been using a previous-generation hard drive on my next-generation motherboard.

The drive I was using was 160 GB, so not exactly a large capacity to work with.  With lots of stuff moving toward cloud-based storage, and with a 400 GB external hard drive on hand, 160 GB was still enough to do most things, though it felt “cramped” at times.  Hard drives are relatively cheap to upgrade: you can get a 1 terabyte drive (that’s 1000 GB) for about $100, and frequently less.  That upgrade would give me all kinds of capacity, but not a huge jump in “speed.”

There are a variety of reasons for this, but part of it is that traditional hard drives actually have spinning parts, much like a record player.  As an illustration, in the image above you can see the compact disc-looking thing (the platter), and what also looks like a needle (the read/write head).  Obviously, the drive’s operation is far more complicated than “it’s just like a compact disc,” but in many ways, that’s really all it’s doing.  Bigger and faster, but the same basic concept (well, and without lasers…).

Enter the “solid state drive,” or “SSD.”  Unlike a regular hard drive, an SSD has no moving parts.  In fact, it works much more like the SD card you put in your camera.  For this reason, these guys tend to be fast in comparison with a traditional drive.  However, the cost is also far higher in a “price per gigabyte” paradigm.  The highest-capacity SSD I can find sits at 960 GB, and is running $3,150 right now.
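To put that “price per gigabyte” comparison in concrete terms, here’s the arithmetic in a couple of lines of Python, using just the prices quoted in this post:

```python
# Back-of-the-envelope "price per gigabyte," using the figures from this post.
ssd_price, ssd_gb = 3150.00, 960    # highest-capacity SSD I could find
hdd_price, hdd_gb = 100.00, 1000    # 1 TB traditional drive

print(f"SSD: ${ssd_price / ssd_gb:.2f} per GB")  # about $3.28 per GB
print(f"HDD: ${hdd_price / hdd_gb:.2f} per GB")  # about $0.10 per GB
```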

In order to run Windows and an array of programs comfortably, you need something over 100 GB, plus a second drive to store your pictures, videos, music, documents, and so on.  Thus, when this 120 GB drive from Mushkin hit $100, I was ready to take the plunge.  $100 for 120 GB was my “benchmark” price for such a thing: the point at which it would be worth spending the cash on a low-capacity device when I could get 1 TB in a traditional drive for the same money.

After some hiccups concerning the cable I was using, I finally got the thing installed this past Sunday, up and running with Windows 7 Ultimate, a variety of games and “useful” programs, and a freshly formatted 160 GB traditional hard drive (my old one) to be used exclusively for media storage.  In a Windows-based test of my various components, where the old hard drive had definitely been limiting my overall performance, the new drive is now the fastest thing in there, and my processor is what’s lagging (though not by much).  The computer boots up and is ready to use in about 20 seconds, far faster than the minutes it used to take.

Overall, I’m a believer.  Where people used to say “add some RAM to ‘pep up’ that old computer,” the SSD is, increasingly, what people are going to suggest.  For $100, you can improve your computer’s speed to a ridiculous degree, turning it back into the speed demon it was when you first bought it.


Upcoming Movies

The last two years have yielded something of a famine with regard to summer movies I’m excited to see.  To be fair, the last two years have also encompassed this little thing called “fatherhood,” so I haven’t exactly had the time or money to go see as many movies as I used to.  That, and living in Iowa, away from my usual movie buddy, made it difficult to see the flicks I wanted to check out.

Admittedly, last year especially didn’t really have much I was excited to see.  Within the realm of comic book features, movies like Thor, Captain America and Green Lantern didn’t really entice me to find someone to go to the theater with.  I caught most of these movies, and others, through Netflix rentals in the fall and spring, and I don’t really think I missed all that much.

That said, now that we’ve made our triumphant return to St. Louis, I thought it best to outline the movies I’m excited to go see this Summer, provided The Wife (…and Josh’s wife…) will allow such things…  :-)

  • The Avengers (May 4, 2012) – This one is gonna rake in tons of cash, if only for the slate of actors they’ve got lined up.  Just about everyone is in this movie and it promises to blow up everything in sight.  Definitely a great way to kick off the summer blockbuster season.
  • Men In Black III (May 25, 2012) – To be honest, I don’t like the idea of effectively replacing Tommy Lee Jones with Josh Brolin.  Then again, if you wanted a young-looking Tommy Lee Jones, you could do worse than Josh Brolin.  I loved the first movie, but didn’t particularly care for the second one.  We’ll see how this one turns out, I guess, but I’ll probably end up seeing it.
  • Prometheus (June 1, 2012) – Billed as a loose prequel to the Alien franchise, this one marks Ridley Scott’s return to sci-fi horror after a long absence.  It probably won’t bring in the bucks like the others on this list, but I expect it’ll still be pretty awesome.
  • The Amazing Spider-Man (July 3, 2012) – I like me some Spider-Man, and this reboot takes the story back to the beginning with Andrew Garfield as Peter Parker and Emma Stone as Gwen Stacy.  When I heard those two names announced, I was a bit apprehensive, but Stone’s good in just about anything she’s in and Garfield was good in The Social Network, so I’ll cut him some slack.  That, and at least in the clips I’ve seen, he seems to pull off the “wit” of the character a bit more convincingly than Tobey Maguire did.  Call me “optimistic” on this one.
  • The Dark Knight Rises (July 20, 2012) – Uh.  I don’t need to write anything here really.  While Batman Begins was a great movie, The Dark Knight practically redefined what a “comic book movie” could be.  I will be shocked if this movie is anything less than stellar.
  • Total Recall (August 3, 2012) – To be honest, I haven’t seen the Schwarzenegger version in quite a while, but judging by the trailer, this one, with Colin Farrell this time, could be good.  The effects look pretty sweet and it’s got a good slate of actors.  My only concern is that Len Wiseman, mostly known for the Underworld franchise, is directing it, so while I’m hopeful this movie turns out to be good, I won’t be too surprised if it’s “middling,” at best.
  • The Bourne Legacy (August 3, 2012) – So, as I was compiling this list, I saw this movie coming up.  I’d heard they were continuing the franchise without Matt Damon, but didn’t realize it was coming so soon.  Jeremy Renner will be carrying on as a new character, though some old favorites from the previous movies will show up, too (Renner is also in The Avengers, earlier in the summer, so he’s packing quite a payday this year).  It’s a strong series of movies, so as long as they stick with the fiction, it’ll probably be alright.  There’s a bit of concern, though, as Paul Greengrass isn’t directing this one (he directed two of the previous three), but it is being directed by the guy who was involved with writing the earlier movies, so at least there’s some pedigree there.  Again, I’m hopeful for this one.


Primer: Electrophysiology

These posts, tagged “Primer,” are posted for two reasons: 1) to help me get better at teaching non-scientists about science-related topics; and 2) to help non-scientists learn more about things they otherwise would not.  So, while I realize most people won’t read these, I’m going to write them anyway, partially for my own benefit, but mostly for yours.

It’s been a while since I posted one of these, but as I’m working on radically different science than I have in years past, and people ask me “what I do,” I figured I should take the time to explain, to some degree.

Wikipedia defines “electrophysiology” in the following way:

Electrophysiology (from Greek ἤλεκτρον, ēlektron, “amber” [see the etymology of “electron”]; φύσις, physis, “nature, origin”; and -λογία, -logia) is the study of the electrical properties of biological cells and tissues. It involves measurements of voltage change or electric current on a wide variety of scales from single ion channel proteins to whole organs like the heart. In neuroscience, it includes measurements of the electrical activity of neurons, and particularly action potential activity.

So, in the most general sense, I’m “listening to neurons talk to each other,” and occasionally, “interrupting their ‘conversations’” in various ways.  When I talk about “conversations,” I’m referring to the act of neurotransmission, whereby one neuron sends a chemical signal across a synapse to another neuron, resulting in the propagation of that signal (an action potential), or sometimes the inhibition of another signal.

As I talked about in a previous primer, in order for an action potential to occur, various ion channels in the membrane of a neuron must open, allowing sodium (Na+) from outside the cell to come in, and potassium (K+) to go out.  Other ions will play roles as well, including chloride (Cl-) and calcium (Ca2+).

Using electrophysiology, it is possible to measure the movement of these ions across a cell membrane using relatively simple principles of physics.  Specifically, V = IR, or voltage = current × resistance.  If you know two of the three terms in this equation, you can determine the third.  Effectively, we do this using a “patch pipette,” a small, sharp glass tube with a wire electrode running through it.  If you know the resistance of the pipette, and you hold the electrode at a constant voltage, you can measure the current across the membrane of a cell (i.e. the flow of ions).
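As a concrete (if very simplified) illustration, here is that arithmetic in a few lines of Python; the 5 mV step and 5 megaohm pipette resistance are hypothetical values in a realistic range, not numbers from an actual recording:

```python
# Ohm's law, as used in a voltage-clamp recording: V = I * R.
# If you know two of the three terms, you can solve for the third.

def current_amps(voltage_volts: float, resistance_ohms: float) -> float:
    """Solve V = I * R for I."""
    return voltage_volts / resistance_ohms

# Hypothetical but realistic values, not from an actual recording:
v_step = 0.005      # a 5 mV voltage step
r_pipette = 5e6     # a 5 megaohm pipette

i = current_amps(v_step, r_pipette)
print(f"current: {i * 1e12:.0f} pA")  # 1000 pA, i.e. 1 nA
```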

In short, this diagram describes the actual process of making this measurement, using a technique called “patch clamp”:

Looking through a microscope (like the one pictured above), you move one of these glass pipettes until it is just touching the membrane of a cell.  You have to be very careful not to puncture the cell, which would damage the membrane to the point where you can’t make accurate measurements.  You then apply a small amount of suction using a syringe to pull some of the cell membrane inside the pipette.  Once you have a strong seal formed (typically termed a “gigaseal”), you can apply a brief, larger pulse of suction with your syringe to rupture the membrane, at which point the inside of the cell exchanges with whatever solution you put inside the pipette.  That internal solution is usually something potassium-based, basically trying to recreate what the inside of a cell would be, aside from all the organelles; however, you can add compounds or drugs to manipulate the actions of the channels you are trying to study.  Typically, though, you apply drugs to the outside of the cell as well.

So, a real-world example of how this technique is used would be my study of NMDA channels.  The NMDA receptor is a cation channel (passing sodium, among other ions) and is very important in neurotransmission, but especially in memory.  When I have a cell “patched” like in the diagram above, I can apply the drug, NMDA, to the cell and see a large sodium current on my computer screen, kinda like this one.

So, over time, when a drug like NMDA or this “Blocker” is applied, you can see a change in the current (measured in “picoamps”) across the membrane of the cell.  In this case, we would read these data as follows: NMDA opens its channel and sodium ions flood inward; that current is reduced by the “Blocker” applied for a few seconds; and once the application of the “Blocker” stops and NMDA alone is applied, the inward sodium current increases again.
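To make that concrete, here’s a minimal sketch of the kind of arithmetic you’d do on such a trace; the current values are made up for illustration, not taken from a real recording:

```python
# Hypothetical trace values in picoamps (inward currents are negative by convention).
i_nmda = -450.0     # steady-state current with NMDA alone
i_blocked = -90.0   # current while the "Blocker" is co-applied

# Fraction of the NMDA current eliminated by the Blocker.
percent_block = (1 - i_blocked / i_nmda) * 100
print(f"the Blocker reduced the NMDA current by {percent_block:.0f}%")  # 80%
```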

These traces allow you to get information about how channels are opening, what ions are flowing in what direction, and to what degree drugs like this “Blocker” are affecting channels.  It is work like this, for example, that led to the characterization of benzodiazepines and barbiturates, drugs that interact with the GABA receptor, a chloride channel.  Without these techniques, it is difficult to know how a drug is affecting a channel at the cellular level.  Just about every cell in your body has channels of some kind, as they are very important for maintaining the function of that cell.  Neurons just happen to rely on ion flow more heavily than some other cells do, though heart cells are also studied in this way, among others.

Effectively, these techniques allow you to determine how a cell works.

Protip

"Hello. My name is 'Google Reader.'"

I’m fully aware that many believe I sit in front of a computer all day and stare at Facebook, posting articles and comments and shirking actual “work.”  In actuality, I’d argue that I only have “http://www.facebook.com” open in my web browser for about 15 minutes per day, on average.  On a “busy” day, when I’m in the middle of a conversation/argument, it’s more like 30 minutes.

How is this possible, you ask?  Why, it’s the power of RSS readers!

“RSS” stands for “Really Simple Syndication,” and the idea behind it goes back as far as 1995, though the first official version was integrated into Netscape in 1999.  In many ways, RSS is what gives blogs the power they have today: the ability for the headline and a brief description of an article or post to be “aggregated” for easy digestion by the reader.

Note: This very blog has always had an RSS feed.  That’s what the cute little orange icon that pops up in the upper-right corner does.

So here’s the secret:  I’ve got 45 different blogs aggregated into my Google Reader account.  This means that my phone, my Kindle Fire, my Chrome web browser, and the Reader website itself all tie into a single repository that collects new posts from each of these sites almost immediately after a new article is posted.  I’ll wake up in the morning and have 75+ articles to wade through, to see if there’s anything interesting, and I can do this easily on my phone, swiping with my finger to scroll through the list.
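Under the hood, an aggregator is doing something very simple: fetch each feed, collect the entries, and sort them by date.  Here’s a minimal sketch using Python’s third-party feedparser library; the feed URLs are placeholders, not my actual subscription list:

```python
import time

import feedparser  # third-party library: pip install feedparser

# Placeholder feed URLs; a real reader would store dozens of these.
feeds = [
    "https://example.com/blog-one/feed",
    "https://example.com/blog-two/feed",
]

entries = []
for url in feeds:
    for entry in feedparser.parse(url).entries:
        # published_parsed is a time.struct_time, so it sorts chronologically.
        stamp = entry.get("published_parsed") or time.gmtime(0)
        entries.append((stamp, entry.get("title", "(no title)"), entry.get("link", "")))

# Newest first, like the unread list in a reader.
for stamp, title, link in sorted(entries, reverse=True):
    print(time.strftime("%Y-%m-%d", stamp), "|", title, "|", link)
```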

Any articles I think may be interesting (based on the title, usually, but sometimes after checking the description), I tap to add a “Star,” effectively bookmarking them for later reading.  Then, I can just click “Mark All Read” and my list is cleared out, ready for re-population.  Once I sit down at a computer somewhere, or with the tablet, I’ll skim the articles I found most interesting.  And sometimes, I’ll share relevant articles on Google+ or Facebook.

So, quite rapidly, I can skim through articles from the St. Louis Post-Dispatch or the Columbia Daily Tribune without ever having to actually visit the sites themselves, avoiding ads and saving me time.

And furthermore, you can share articles to Facebook or Google+ directly from most of these blogs, as this is how they generate their traffic.  You just have to click “Share” from the page in question, or from within Google Reader.  A little box shows up and you write what you want to post, along with the link.  And you never have to actually go to Facebook.com to do this.

So yeah, a little “protip”: use an RSS reader of some kind to make your blog reading more efficient.  You are more than capable of getting information throughout the day without getting bogged down in Facebook or on the blogs themselves.  You can, in fact, get work done and still provide useful information on subjects that interest you.  It really isn’t that hard…