Another Reason to Buy American

A few months ago, I started listening to This American Life, a weekly Public Radio International show broadcast on NPR member stations (Sundays around here, I think). Back in late July, they aired a particularly engaging episode about “patent trolls,” and I’ve been hooked ever since.

Last week’s episode, which I highly suggest you listen to, focuses on manufacturing in China, specifically of products in Shenzhen: products from Samsung, Dell, HP, and, more to the point, Apple. Mike Daisey is something of a storyteller who gets up on stage in front of live audiences and performs one-man shows. As an Apple lover, he expounds upon his history with their products and how he always sought to understand how his iPad, iPhone, MacBook Pro, etc. worked, even going so far as to take his laptop(s) apart, clean them, and put them back together. Through various circumstances, however, it occurred to him that he knew very little about how these products were actually made, so he took a trip to Shenzhen to visit the Foxconn plant where practically all Apple products are manufactured. As others have reported in the past, he found harsh working conditions, that unions are illegal, and that underage girls were employed in the factory.

More to the point of what I’m getting at, Daisey says that Apple is actually doing relatively well with its manufacturing practices, holding yearly audits, requiring that its manufacturers follow strict guidelines, and so on. Others in China and Southeast Asia, as a whole, aren’t as careful. Some have even suggested that, while these practices are obviously unfortunate, in many ways they still provide a better living than these individuals had prior to industrialization. Furthermore, in many ways, these countries are currently ascending much as the United States did during the Industrial Revolution. It’s something of a “growing pain” that countries must go through before they can decide which work practices will be most efficient for companies and most beneficial to workers.

This issue is something I’ve never associated with the idea behind “Buy American,” or at least, “Buy From Companies You Know Are Providing Some Level Of Non-Exploitative Treatment To Their Workers.” Many (most?) manufacturing plants in North America are pretty good about treating their workers fairly, with some limit on hours worked, overtime pay, a minimum wage, and so on (depending on unionization and other factors, of course). There are a variety of European companies that do as well or better in the treatment of their workers, and I’m sure there are even some in Asia that do right by their employees. So while I’d suggest looking into the manufacturing practices of the companies we tend to buy from, I see the whole endeavor as just another reason to Buy (North) American.

Up until now, I always thought of it as an economic issue: keeping our money here rather than sending it overseas. Increasingly, this is difficult, as manufacturing jobs have all but left the U.S. Even when we “Buy American” in things like cars, they’re only assembled here: all the individual parts are built overseas. But after listening to this particular story, I’m considering other reasons to try buying American-made/grown products, where feasible. Unfortunately, it’s probably impossible to buy a TV, an MP3 player, a computer, or a phone that was assembled, let alone built, in the U.S. I guess I’d like to see the “Buy American” ideal extended so it not only encompasses the economic need to keep our money here, but also the need to extend workers’ rights, and the belief that each individual has value, to the countries that make all the “stuff” we keep buying. Perhaps something like the “Fair Trade” label used on food products from around the world: a certification process companies can apply for that provides some degree of protection for the people they employ.

I dunno.  I just never really thought about the concept of “Buy American” as a way to reward companies that treat their workers well.  Perhaps we all should.

Edit: In mid-March, This American Life had to retract the initial report mentioned above, saying that Daisey had fabricated enough portions of his monologue that they deemed it unfit for their journalistic standards. Generally speaking, things like chronology, specific interviews, and certain details were fact-checked with his translator, Cathy, who told This American Life that it didn’t all happen in that order or in that way. They interviewed Daisey again in that week’s podcast; he felt bad about the ordeal, but wanted to make sure people realized that the things he said are “true” in that they have happened at the plants where Apple products are made: just not necessarily on his particular visit.

Fred Flintstone Wants To Kill You

I’m slowly catching up on podcasts from the last few weeks when I wasn’t really in Podcast Listening Mode, and recently I listened to On Point’s discussion of recent research on vitamins. Much of the discussion focused on recent reports suggesting that overdosing on vitamins for years could do more harm than good. Specifically, they discussed the Selenium and Vitamin E Cancer Prevention Trial (SELECT), in which men who took the daily recommended dose of Vitamin E were found to be 17% more likely to develop prostate cancer over the 7 years they were followed. This news comes after another recent study in the Archives of Internal Medicine suggesting that multivitamins, folic acid, and iron and copper supplements may increase mortality in older women.

This all reminds me of what Dr. Shaffer told us in psychopharmacology class back at Truman: you don’t need vitamins if you eat a healthy diet. Human physiology is set up to absorb the nutrients you need and get rid of the ones you don’t, provided you eat the diet your body needs to survive. This includes vegetable, dairy, grain, and meat sources. If you start removing any of those sources of food, you either a) replace those nutrients with something like a multivitamin, or b) die sooner. Apparently, however, new data like those referred to above suggest that even with the replacement of nutrients, your body still may not be very happy with you.

Brooke and I talked about this a few days ago, and we both had a question about Folic Acid (Vitamin B9) intake, as this is one of those vitamins pregnant women are instructed to take to limit the risk of congenital malformations in children, including spina bifida and cleft palate. The recommended daily allotment of Folic Acid for a pregnant woman is between 400 and 800 µg, though your doctor may prescribe more if there’s a history of problems in your family. Bear in mind, however, that it’s important that women of childbearing age have Folic Acid in their diet, or take supplements, before they are pregnant, as it’s most important in the early stages, before many women even know they’re pregnant.

Speaking of which, what are the ways to get Folic Acid in your diet, aside from a pill? Spinach, peas, beans, egg yolks, sunflower seeds, white rice, fortified grain products (e.g. pastas, cereals), and livers and kidneys, among others. Now, I ask you: how many women between the ages of 18 and 25 are eating anything from that list on a daily basis? I’d guess not very many. They’re probably going to get most of it from breads and cereals, though the recommended daily allotment of folate is added to those products: it’s not naturally present in wheat.

(Side-note: The U.S. government, on its Women’s Health fact sheet, says that vitamins are still essential to ensure you get the daily allotment of folate, and that while it’s possible to do so by diet alone, it’s difficult. Anyone reading this should go by what their doctor tells them. I’m only using folic acid as an example. I am, by no means, a medical professional. :-))

I guess my larger point is that vitamins are all right, but trying to rely on them in order to avoid eating the foods that we as Homo sapiens have evolved to require over millennia is unwise. It’s more important that we get proper dietary sources of the vitamins and minerals that our stomachs have “learned” to take advantage of over generations. This isn’t to say you should only eat organic food, or only eat food that you grow yourself. Sure, organic sources can be healthy, but I’d argue that it’s better you eat your broccoli every day regardless of whether it’s organic or not. Women of childbearing age should be eating food from the sources outlined above anyway. Men at risk of prostate cancer should be eating grapes and leafy green vegetables, and avoiding trans fats, anyway. Heck, regardless of whether you’re “at risk” of prostate cancer or “at risk” of becoming pregnant, these are things you should be eating anyway.

So yeah, I don’t really think that vitamins are that bad for you.  But what is bad for you is trying to rely on them, or other supplements, as a substitute for a healthy diet.

(Final Note: An actual medical professional posted this article on Huffington Post to help assure people that they shouldn’t necessarily stop taking all their vitamins, and that there are some flaws in the conclusions being drawn from these studies. As with anything in science, more studies are needed to come to any real conclusions on this matter.)

What “The American People” Want

I’ve been paying attention to this fight over the debt ceiling to an extent.  Not a huge one, not a small one: just “an extent.”  I certainly have my view on the subject (i.e. make some cuts to entitlements, raise revenues on the top 5%), but that’s not what this particular post is about.

This is about the mythical “American People.”

I listen to NPR’s “On Point” program on a regular basis and, on more than one occasion, they’ve had politicians on talking about what “The American People” want.  They apparently want a balanced budget.  They want the government to act just like a family does.  They want Big Business to pay their fair share.  They want to be Pro-Life.  They want to be Pro-Choice.  They want to lower taxes.

Where are these people?

Frequently, when politicians talk about “The American People,” they’re talking about The Majority. They fail to mention that The Majority may represent as little as 51% of the actual voting population of America: there’s another 49% that’s statistically just as big.

With regards to the debt ceiling, let me just go ahead and summarize what the actual American People want for those politicians that apparently don’t know:

  1. They want their Social Security to stay the same.
  2. They want their Medicare/Medicaid to stay the same.
  3. They don’t want more taxes.
  4. They don’t want wasteful government programs.
  5. They want roads, bridges, police, fire fighters, clean water and constant electricity.
  6. They want a job they like.
  7. They want to be paid more than they’re currently getting.
  8. They want to buy the stuff they want with the money they make from the job they have.
  9. They want their kids to go to good schools and get the education they want.
  10. They want to live the lives they want to without the government interfering (or their neighbor, for that matter).

There are probably more things I could list, but this is a good start. The American People just want things to continue going as they are, or to improve. They don’t want things to change unless they will get better. “The American People” that most politicians seem to be talking about don’t actually exist, except in the polls they use to win their elections.

I’m getting a little tired of “The American People.”  I want The American People back.

Accepting Religious Curiosity in Context

I was catching up on NPR’s “On Point” from February 16th, where Tom Ashbrook was interviewing Richard Watts, author of various books, the most recent of which is “Hungers of the Heart: Spirituality and Religion for the 21st Century.” The entire podcast is worth listening to, but toward the end, Watts and Ashbrook got into some interesting territory. In general, Watts is very interested in “the historical Jesus,” looking at the man, the historical record, and the context in which the Bible was written, as opposed to focusing on what could be considered the more “mystical” aspects of the Bible. We pick up the transcript as Tom Ashbrook is reading a comment off the internet:

Ashbrook: “…but then here’s Elmridge who says of you ‘but he is denying the divinity of Jesus.’  What about that, Rev. Watts?”

Watts: “Well, one of the things we need to do is we always need to read texts in context.  When we don’t do that we get into big trouble.  Now, for example, if I tell you that there’s…that I know someone in the first century that’s called ‘divine,’ ‘the son of god,’ and ‘the savior,’ you know, who do you suppose I’m talking about?  Well, most people would say you’re talking about Jesus.  No, I’m talking about Caesar Augustus.  Caesar Augustus received all of those divine titles, and so when Christians talked about Jesus and what they had encountered in his life, they used titles which were very counter-cultural, they were saying, look, if you want to know what life is about, if you want to know what real power is, if you want to know where divinity is, look at this peasant going around talking about creating a new community of compassion and love.  Don’t look for the seat of power for the emperor in Rome.”

Watts: “Christianity in the very beginning, before it was called ‘Christianity,’ Tom [Ashbrook], it was called ‘The Way.’  And it was a way of life.  It was a lifestyle.  It doesn’t mean that lifestyle was devoid of a basis of belief, of course it was, but it was a lifestyle long before it was a creed, and I think we need to get over our hang-up with absolute creeds and get back to the lifestyle, a lifestyle which is non-violent, which is compassionate, which is inclusive, which creates community rather than holding people off at arm’s length.”

There was another interesting exchange later in the podcast.

Ashbrook: “You know very well, as do many other preachers, that the kind of mainline protestant churches that you’re describing that may be most open to this kind of open-minded, liberal conversation, are the ones that have seen their attendance just go through the floor in the last decade.  Now why is that?”

Watts: “Part of that, that’s a great question, and part of the problem is, that, I have to lay at the feet of clergy.  It seems to me that an awful lot of clergy don’t bother to teach the people what they themselves have learned.  And so, people are sort of fundamentalist by default because they, these sorts of questions that you and I are talking about today are not often raised and I know that in mainline churches there are all kinds of people sitting there never receiving permission to raise their questions, never having the opportunity to engage in give-and-take about what their life experience has taught them, or what their life experience has asked them.

Let me tell you a very brief story.  There was a scholar in the Jesus Seminar, which works to uncover the facts about the historical Jesus, that was giving a talk to a group of Missouri Synod Lutherans, a very conservative denomination, and he was talking about New Testament documents, and the document “Q,” which is a lost document of the sayings of Jesus.  And then came time for the question period and he wondered, he felt like Daniel in the lion’s den, and a woman stood up and, instead of addressing the speaker, she turned around to address her preacher in the pew behind her and she said to him, ‘did you know about Q?’ And he said, ‘well, yeah.’  And she said, ‘why didn’t you tell us?’  And I think that’s a very powerful parable, that our churches are full of people that are questioning, who are curious, but who aren’t being adequately taught…”

I won’t write much about this, as the post is already long enough with these transcripts included. I just wanted to say that this is the kind of thing I like to hear about: the historical context in which the Bible was written, and how that context can help inform what we know and what we don’t know about religion. Moreover, I think that if there were more of this being taught in our churches today, there would be fewer “black and white” interpretations of what the Bible tells us, and we would all be more accepting of each other.

It’s a shame when pastors and educators shut down the intellectually curious.  We should all be fostering curiosity in ourselves and in our kids in order to better understand where we come from and who we are, rather than asking someone to tell us, and then accepting that information blindly.

Questioning and thoughtful investigation is the way of science.  It should be the way of religion, too.

T.M.I.

I have been slowly catching up on podcasts from late last year now that I’m back at work. Yesterday I was listening to one from NPR’s On Point discussing the Wikileaks scandal and, more broadly, the world we now inhabit with regards to leaks, the internet, and the overall availability of information.

Toward the end of the segment, the host, Tom Ashbrook, was talking to the former Director of National Intelligence, John Negroponte. He asked Negroponte how we, the United States, would or could deal with a leak like this. Negroponte answered that they would do their best to prevent it from happening in the first place, placing greater restrictions on the individuals that can access certain information, and also help re-classify information that should be classified versus that which really doesn’t need to be. Ashbrook kept pressing him on the matter, asking: “What would you do in the event of a leak? How would you stop it?” Negroponte kept going back to “stop it at the source.” It got annoying hearing the same question over and over when I kept repeating the answer in my head as often as Ashbrook could ask it.

The correct answer?

You do nothing.

There is nothing you can do. Once the Internet has your information, you’re done. It’s out there and you can’t stop it. You can shut down a server or two, but the information propagates to such a degree that you can never fully eradicate it.

As happens frequently, this exchange got me thinking about generational differences and their views on the Internet as a whole, specifically the degree to which each generation seems to embrace the sharing of information. [Note: I have talked about this before…] For those of us that grew up in parallel with the Internet (i.e. it was growing as we were growing), I think the transition was easy. We learned to live together, gradually sharing some bits of information and withholding others. We were using the Internet before Google even existed, when all you could do was use Yahoo! to find websites that someone had manually filed within their directory. There was no Facebook. There was no YouTube. Primarily, we were takers of information rather than providers, at least until we became more comfortable contributing to this new ecosystem.

The generation(s) older than me have taken to the Internet at a slower pace (at least in terms of creating new information…), largely because they’re more cautious. Quite a few folks from those generations are now using e-mail and Facebook, and consequently are starting to rely on them more than ever before. You can still see the delay in overall adoption in things like smartphones, though, where these folks are only now coming around to the idea that complete and total connectivity is a necessity. This is likely because their children and grandchildren are also more accessible, so if they want to contact them, this is how they have to do it.

It’s the younger generation(s) that I’m more curious about. These people are growing up in a world where the Internet “just exists,” much like air and gravity. It’s a reality. It’s something you live with and use. I guess the difference goes back to information sharing: the older generation never really shared things and stayed more private; my generation gradually let certain things slip and get onto the Internet; and the younger generation never really learned the restraint that should be applied to some things rather than others. However, I imagine that these kids are much more attuned to “what should go on the Internet” and “what should not go on the Internet” than I give them credit for. They’ve seen things happen to their friends when something gets posted that shouldn’t, likely causing them to think twice about their choices.

Personally, I’ve always held the view that whatever I post on the Internet is viewable by The World At Large. Anything I post on Facebook (and there are quite a few politics-based links I post up there…my views are pretty clear…) can be seen by practically anyone. Anything on this blog can be seen by absolutely anyone. Any future employer will likely run a quick Google search on my name, and this blog will be the first thing that comes up. They can go back almost 6 years and read all about me, my family, and what I’ve been up to. Am I proud of all of it? Not necessarily, but I also don’t hide from it. That information is representative of who I was and who I am today. If you want a snapshot of Andy Linsenbardt and all he’s about, this is where to find it. Freely available and open for all to see.

This is also how I view information in general. Sure, we have an inclination to hide things, but more often than not, we’re trying to hide things that we’re embarrassed about. I plan on teaching Meg and her siblings someday that the Internet is a very useful tool, but anything you post on it can be viewed at any time. If you don’t want anyone to see a certain picture of you drinking while you’re underage, don’t put it online. Someone will find it. Even if you delete it, it’s saved on a server somewhere that someone can get to. Anything that could potentially embarrass you should stay far away from the Internet. Really, though, you just shouldn’t do things that could potentially embarrass you someday, but that’s another matter…

No matter what generation you come from, “honesty is the best policy” still applies to you. Everyone is entitled to secrets, but there are some things that may as well be out in the open, freely accessible, so that others know more about you and how to deal with you. It ends up saving time in the “getting to know you” stage. You come up with better strategies for dealing with others when you know more about them. Sure, you can learn how to take advantage of them as well, but hopefully this kind of openness spreads the naivety pretty thin.

Which brings us back to the Wikileaks deal from last year. A lot of people were concerned that this information could hurt America’s standing in the world and damage our relationships with other nations. Information that the United States was hiding was perceived as something to be embarrassed about, even if, at first glance, that information was innocuous. In the end, the complaint that this leak somehow disrupted the fabric of space-time and all is lost is moot: if you really didn’t want that information out, then you should have classified it differently.

However, the larger point is this: perhaps most of that information should have been out in the open anyway.  Much as reading this blog gives the reader some extra insight into me, perhaps a lot of that information provides extra insight into the world we inhabit and the cultures we interact with.

And I don’t see a problem with that.

The Science of Speaking Out

Ira Flatow had a group of climate scientists on his show, NPR’s Science Friday, this past week, discussing the “fine line” many scientists find themselves walking. Philosophically, there are many in the scientific community who believe they should present the facts and allow the public to interpret them. These scientists frequently just want to stay out of that realm of discourse, allowing the public (and, therefore, politicians) to decide how their data are used and what the best course of action is. Largely, this is how it’s always been: early astronomers could report what they observed, but had to wait for their ideas to be accepted by their respective communities.

This particular group of climate scientists, however, is getting together to move beyond the boundaries they have typically held themselves to, instead choosing to speak out with what they know and actually make policy recommendations based on their information. Largely, this group adheres to the maxim famously offered by Daniel Patrick Moynihan:

“People are entitled to their own opinions but not their own facts.”

That is to say, these scientists are tired of presenting facts time and time again, only to have them ignored and to have other people’s opinions matter more than proven factual data. To the scientific community, there is no question that global warming is occurring and that humans contribute to it. In a separate (but related) issue, to the scientific community, there is no question that evolution occurs and that natural selection is its most likely mechanism. There is no question that frozen embryos are kept in that state for years and end up “dying” in a liquid nitrogen freezer, discarded in a biohazard bag and incinerated, when they could have been used for stem cell research instead. Yet politicians, for some reason, are able to ignore these facts in deciding what is taught in our schools, which energy policies should be enacted, and how important research can be conducted.

After listening for a while, an individual called in and asked a question that intrigued me, one that I hadn’t really considered up until now: why is it that members of Congress, and politicians in general, feel the need to question the facts of science, yet do not pose the same questions toward religious beliefs? Let us assume that all politicians turn their magnifying glass toward all information that comes across their desks (hah!). Shouldn’t that magnifying glass analyze all information the same way, equally? Shouldn’t they ask, “Well, this group of people used rigorous experimental techniques and verified their findings, and this other group didn’t. Which should we believe?”

I mentioned this concept to Brooke, and her attitude was, generally speaking, “That’s Just How It Is.” This is true, but it still irks me. I realize that this is how religious beliefs have always been: there has always been a large enough group of individuals so adamant about their beliefs that, no matter what facts you give them, they will not shift policy to match. The recent issue of childhood vaccinations, and the misconceptions about them, comes to mind. I’m not sure if this is a failing of critical thinking skills or of education in general, but it’s been such a pervasive problem throughout history that I have to wonder. Frequently, it takes at least one generation to change minds about these things, and in some cases, many generations. I’m just afraid that, on many of these issues, we don’t have that long.

Case in point: the Catholic Church found Galileo “vehemently suspect of heresy” for his thinking that the Earth revolves around the Sun. He died in 1642, and he couldn’t be buried with his family because of it (to be fair, the Church moved his remains to their rightful place almost 100 years later). However, the Catholic Church waited over a century before accepting heliocentrism, and until 1965 to revoke its condemnation of Galileo himself.

Scientists are getting a little annoyed with that kind of treatment. Granted, the world moves faster today and ideas are disseminated and accepted much faster, yet Natural Selection has been around as a concept for over 150 years, and there are still people that use the phrase “but it’s just a Theory.” It shouldn’t take over 150 years, let alone 300, for ideas to be accepted when those ideas are revolutionary to our understanding of our place in the universe, and it really shouldn’t take that long for governments to make policies that use legitimate scientific data to actually preserve our place in that universe by preventing our extinction from it. In 300 years, without any change in policy, we won’t have California or Florida anymore. It will be too late.

The Digital Generation

I was listening to NPR’s On Point podcast from November 2nd, where Tom Ashbrook was interviewing Douglas Rushkoff about his “Rules for the Digital Age,” discussing Rushkoff’s new book “Program or Be Programmed: Ten Commands for a Digital Age.” The discussion bounced around quite a few topics, but largely focused on the idea that people today take their digital presence for granted, and that people interact with digital media in such a way that they don’t control the outcome: instead, they are controlled by their digital media.

For example, Rushkoff recounts a story from his PBS “Frontline” documentary, “Digital Nation,” where the producers ask a child: “What is Facebook for?” The kid’s answer was “for making friends.” It’s a relatively simple answer, and one that many adults would also provide, yet the truer answer is “to make money off of the relationships, likes, and dislikes of its users.”

As another example, Rushkoff says that when we were growing up decades ago, students would go to the World Book or Encyclopedia Britannica in order to get a “primary source” for their book reports. Now, for many people, simply using Google is “good enough” to find the information you want. If you use Google’s Instant Search option, introduced a few months ago, your search results change by the second and are largely influenced by traffic on those sites, yet Google is perfectly capable of adjusting the results so that some pages show up first and others don’t. For many users, they’re just “The Results” they get; the user typically doesn’t think about the vested interest that Google has, as a company, in making money off of its Search ventures.

Rushkoff’s solution, outlined in his “10 Rules,” is generally that people should be more computer literate. He says that kids today who take a computer class in junior high or high school learn Microsoft Office. To him, that’s not “computers”; it’s “software.” You aren’t learning how a computer works. You aren’t learning about the programming that had to go into those programs. You aren’t learning about the types of programs available (i.e. closed-source vs open-source). You simply accept what you are given as Gospel, without thinking critically.

As I listened to the discussion, especially with regards to Google, I had to think about this past election, which saw the rise of the Tea Party. While many of them would have you believe that they were all educated, intelligent, active people, many of them were taken advantage of by third-party groups, primarily corporations. These are individuals that believed what they found in Google searches without thinking critically about what they were discussing. Rachel Maddow did an interview in Alaska covering the Senate race of Tea Party favorite Joe Miller (who lost…), and the supporters outside were angry about all the policies that Attorney General Eric Holder had supported, and his voting record prior to becoming A.G. Of course, Maddow pointed out that Holder never held public office, and thus had no voting record. But these people believed it because that’s what they were told. It’s what they read on the internet. As if “The Internet” were to be equated with the Encyclopedia Britannica of old.

Rushkoff’s larger point, in my view, is that people today simply don’t have the critical thinking skills to handle what digital media has provided. So much information is now available, from so many more sources, that individuals can’t effectively wade through it all and discern whether what they are reading is fact or fiction.

I’m not sure that a better understanding of computers alone would be enough to combat the problem, honestly. Rushkoff suggests that some basic programming skills would be helpful for people to have as well, much as people thousands of years ago had to learn to write when “text” was invented. He believes that the invention of text empowered people to write laws, to hold each other accountable, and to be more than they were. He believes that giving everyone basic programming skills would do something similar, making them more likely to know and understand why a computer does what it does, and how the programs on their systems interact with programs on the internet as a whole. I barely have any programming training and I think I’ve got a relatively decent handle on how the internet works, but most of that was self-taught over nearly two decades. I certainly don’t think it would hurt to have kids learn some basic programming, but kids are already missing the boat in so many other subjects that programming is surely at the bottom of the list.
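To make that concrete, here’s a minimal sketch (my example, not Rushkoff’s) of the kind of few-line program that starts to demystify what a browser actually does:

```python
# A web page is just text that a program asks a server to send back.
from urllib.request import urlopen

# Fetch the raw HTML of a page, exactly as a browser would.
with urlopen("http://example.com") as response:
    html = response.read().decode("utf-8")

# A browser is simply a program that fetches text like this and
# draws it on the screen; seeing the raw source removes the magic.
print(html[:200])  # the first 200 characters of the page source
```

Even a toy like this makes the point: the programs we use every day aren’t magic, they’re just other people’s instructions, and you can read and write instructions of your own.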

To me, it’s the critical thinking part that needs to be improved. With some basic critical thinking skills, hopefully, people would be more informed about everything they do in their daily lives: in raising their children, in voting for elected officials, in thinking about where their food comes from, in choosing which car to drive, in where they get their information, and so on.

But hey: if people want to learn more about computers, I’m all for it.

P.S. Happy birthday, Mom.  🙂

Moderation

I was listening to On Point from NPR on the way home today, and their subject was childhood obesity in the U.S. The discussion bounced from point to point, including taxes on soda, the rise of “Super Size” fast food meals, and the subsidies to corn farmers that allow for all the high-fructose corn syrup in children’s snack foods.

I was struck, however, by two callers to the program. One of them complained about how difficult they find it, as a parent, to prevent their kids from getting high-sugar snacks, as schools and day-care programs still offer them (along with fruit, veggies, etc.). Another parent pointed out that they only allow their children to have soda “on special occasions, like parties.”

For the record, I used to drink quite a bit of soda, especially in late high school and college. Only after getting married (i.e. having someone to make healthy dinners for me…) did I lose the 30 lb I had gained over that 7-year period, primarily by no longer eating Hot Pockets every day for lunch or drinking upwards of 64 oz of soda per day. I would estimate that my Linsenbardt/Plochberger genes probably kicked in around the same time, allowing my metabolism to bring me a bit closer to my family’s general body size.

Growing up, however, I can’t say I was overweight. I drank soda. Mom sent fruit snacks along in my lunch (even though those “fruit snacks” contained maybe 0.001% actual fruit…). I ate chips. I ate candy bars. I ate ice cream. And, to this day, I still do.

I think one thing those callers, and many overly-protective parents, are missing is the “moderation” piece of the puzzle. Denying your children soda, or making your kids eat exclusively organic food, will not solve the obesity problem amongst young people. Preventing your children from watching more than 1 hour of television a day, or keeping them from video games, will not prevent your kids from being overweight. These approaches can help, but they are, by no means, a silver bullet.

My intention with Meg, and any future kids, is to try to instill a sense of moderation from the beginning. Yes, she can drink soda. Yes, she can have candy bars. But will I let her down a 32 oz soda on the way to Wal-Mart and another one for the trip home? No. Will I send a “snack size” candy bar in her lunch, and then let her have a “king size” one for a “snack” when she gets home from school? No. Will she eat all the vegetables on her plate like her Dad does (even if she and he don’t like them)? Yes, she will. Will those vegetables be organic? Sometimes, but it’s more important that she eats them at all, along with the rest of her “balanced diet.” It isn’t a black-or-white issue of eating only some things and none of another. It’s the same reason Prohibition didn’t work out so well.

Maybe my opinion(s) will change over the coming years, but I guess that’s where I stand for now.  Lest she turn out like Cartman.

Edit: The USDA recently came out with some new info on the potential benefits of a soda tax. Some of the info is summarized in the following quote:

A tax-induced 20-percent price increase on caloric sweetened beverages could cause an average reduction of 37 calories per day, or 3.8 pounds of body weight over a year, for adults and an average of 43 calories per day, or 4.5 pounds over a year, for children. Given these reductions in calorie consumption, results show an estimated decline in adult overweight prevalence (66.9 to 62.4 percent) and obesity prevalence (33.4 to 30.4 percent), as well as the child at-risk-for-overweight prevalence (32.3 to 27.0 percent) and the overweight prevalence (16.6 to 13.7 percent).
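As a quick sanity check (my own back-of-the-envelope arithmetic, not part of the USDA report), those numbers line up with the common, admittedly simplified rule of thumb that a sustained deficit of roughly 3,500 calories works out to about one pound of body weight:

```python
# Back-of-the-envelope check of the USDA figures using the common
# ~3,500-calories-per-pound rule of thumb.
CALORIES_PER_POUND = 3500

def pounds_per_year(calories_per_day):
    """Convert a sustained daily calorie reduction into pounds lost per year."""
    return calories_per_day * 365 / CALORIES_PER_POUND

print(round(pounds_per_year(37), 1))  # adults:   ~3.9 lb/yr (USDA says 3.8)
print(round(pounds_per_year(43), 1))  # children: ~4.5 lb/yr (USDA says 4.5)
```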

The Atlantic has another article discussing some of the proposed benefits, as mentioned in the new USDA report.

3 posts in one day???

…it must be a record! I’m just giving you something to do while Andy and I are gone for a few days. Here’s how we spent our evening:
[photo]