Thursday, January 31, 2013

Tips for grad school recruiting weekends

It's the new year, which means over the next few months everyone who has applied to grad school in chemistry (and other fields... there are other fields, right?) should be receiving responses from admissions committees. Acceptance letters generally contain stipend information, program information, and an invitation to a visit weekend. In chemistry, these visits are almost always paid for by the department. Yep: free trip, food, and drinks. But why?

There are two ways to look at the motivation behind visit weekends. Decide for yourself how true each one is:

Rationalization 1: Most good chemistry departments don't want to waste their time and money.* An admitted student in a graduate program is expensive: stipend, research supplies, tuition, etc. are all paid for by the department and/or the student's advisor. So it's in everyone's best interest to make sure students pick the appropriate school for them. Hosting a visit weekend allows prospective students to meet their possible classmates and their future advisors, and to see if the grad program and facilities appeal to them. That way there's a better chance they're happy where they go, leading to a better chance of graduation.

Rationalization 2: If you throw free stuff at students who have been paying for school, offer them money, and paint an inordinately sunny picture of your grad program, you'll get more students. The more students you get, the harder you can work them, and it won't matter as much if some leave. Cheap labor!

Grad school visits can be fun. They differ from med school interviews, vet school interviews, and non-science grad school visitation weekends in one key way: you're already accepted. And the visit doesn't cost you anything financially.** Note the following figure, which illustrates the processes comparatively:

[Figure: flowchart comparing the two processes -- for grad school, acceptance comes before the visit; for med/vet school, the interview comes before acceptance.]

Overall, grad school visits are an important opportunity. Keep in mind a few things:

1. It's not an interview. You're not trying to impress the department. They're trying to convince you to make essentially minimum wage, working the equivalent of 1.5 to 2 full-time jobs for probably 6 years with minimal outside social contact or hobbies, so that one person in your department can get approximately 5 more papers. Also you'll have to babysit undergrads. If you're going on a visitation weekend (except, I guess, at Scripps, which actually does hold interviews), that means you're in. You're guaranteed a spot. It's their job to impress you and convince you to attend.

That being said, don't dress like a slob; don't drink too much; don't deliberately offend anyone. Your professional career is beginning, and first impressions are kind of important. But it's not a med school interview or a corporate meeting: in grad school, getting work done is more important than showing off your suit and haircut. So relax and be yourself. Unless you happen to be an arrogant idiot.

2. Don't talk about yourself very much. Again, you're not trying to impress the admissions committee. You shouldn't be trying to impress any of the students there either (they won't be impressed by you, no matter how many fifth-author Chem. Bioorg. Med. Chem. Lett. Eur. J. Int. Ed., Dalton Trans. papers you have)--it serves no purpose. Make the visit about listening and learning, not about boasting. You can boast after you succeed in chemistry and land a $40k job after ten years of school. Instead, observe the professors. Are they approachable? Arrogant? Distant? Humble? Listen to how the grad students talk. Do they brag? Are they competitive or easy-going? Can any of them talk about something besides chemistry? Are they trying to impress you? And very importantly, pay attention to the other prospective students on the visit. Do they all act like they have something to prove? Are they people you could get along with? Are they bragging about their seventeen fifth-author Chem. Bioorg. Med. Chem. Lett. Eur. J. Int. Ed., Dalton Trans. papers? Are they fun to hang out with? (Being fun to hang out with is a legitimate concern; you'll see some of these people a lot.)

3. Find out the best and the worst about the program. Generally, admissions committees will carefully select which grad students host prospective students. They'll hand-pick the ones who still have their optimism: maybe the students who have just passed candidacy exams or who haven't had to TA in a while. Generally these are 2nd- to 3rd-year students.

It's important to listen to these students: you want to find out about the good parts of the program! But make sure you seek out the embittered, late-stage students as well. You want a spectrum of opinions. Find at least one student who doesn't really want to talk to you. Have a conversation like this:

You: "Hey, seventh-year grad student!" 
Student: "F@#* off." 
You: "Cool! What do you think of Professor Schmorey? His research is so cool! I want to be a PI! Do you like working for him?" 
Student: "F@#* off." 
You: "Awesome! Is teaching really fun or super fun?" 
Student: "F@#* off."

In short, you need to know what the possibilities are. No department is as rosy as the admissions people want you to think. Ascertain what the emotional arc will be on your grad school journey.

Also, be wary of how much or how little effort/money the school puts into recruitment. Too much effort: are they desperate for students? Too little effort: do they even care? It's a courtship. Try to distinguish genuine enthusiasm from mere marketing.

4. Ignore the stipend (but do look at cost of living). Yes, it's cool that they're going to be paying you. At least, it's cool for about four weeks; then it's depressing, because you could be managing a McDonald's and making more money than that, and it would only be a 40-hour work week, you'd get benefits, and hey, you'd get infinity fries if you wanted them.

Point is: you're not going to get rich in grad school. That's not the point. If a school doesn't pay enough to live off of, that's a problem, but pretty much every school will. Ask grad students about it on your visit weekend. But it's not a good idea to make a "high" stipend an important part of your decision. You'll be poor either way.*** What's more important is choosing a school that will benefit you in the long run.

Also, don't ask other prospective students if they got additional fellowships or signing bonuses, or brag about yours. You'll either wind up sad or you'll annoy everyone.

5. Learn about departmental seminars. No one really says this, oddly. I guess I think it's important because I've seen schools with very bad seminars and schools with very good seminars. Seminars (sometimes called colloquia) are when the department hosts a speaker (usually an academic scientist, occasionally an industrial chemist) who gives one or two free lectures to the department about their research. Often there's coffee, muffins, cookies, tea, etc.

Departmental seminars can be very bad for several reasons: sometimes attendance is required; sometimes the material is boring; sometimes the speakers aren't engaging and the PowerPoints are weird and the artwork and data are definitely recycled from 1995, and who uses WordArt anyway? Usually this happens at lower-tier schools, where no money is available to pull in noteworthy scientists and so anyone within a walking radius is invited to lecture. In these cases, seminars can interfere with getting your labwork done, as you don't really gain anything from them.

But seminars can be very good. Ask current students about this! Some schools have fantastic speakers--Nobel prize winners,**** influential chemists, people whose names you've read or whose work you've learned about. These are great opportunities: you can network (important), you can hear interesting chemistry, and you can broaden your base of knowledge. In short, good invited speakers are an important, underrated factor you should consider in your decision. 

6. Don't make it all about the research. Your undergrad advisor will sit down and look you sternly in the eye. "Make sure you're choosing a school for the science," he'll say (or she'll say). That's what a lot of people will tell you: make sure the science is something really interesting. That makes sense, doesn't it? It's a research program! You'll be doing research every day! Isn't that important?

Yes, it's obviously important. It would be absolute drudgery to be stuck for an indeterminate number of years doing a project you hated -- or worse, were ambivalent about. So it's critical to pick a program that has a couple of interesting-sounding research areas. Ask students on your visit weekend about their research: do they seem excited by it? Often it's not the exact research itself but the people involved that make the work exciting -- if you have an encouraging research advisor and helpful labmates but a merely decent project, you may easily end up more motivated than someone who has their lifelong dream project but a manipulative advisor and bitter labmates.

You'll probably change what you're interested in during your first semester. Additionally, very, very, very few people end up doing for their career what they do in grad school. Tenure-track positions are rare; they almost always require one or two postdoc appointments beforehand, and those usually involve a change in research focus. Plus, many people go into industry or non-traditional careers completely unrelated to their dissertation topic. So don't worry about finding the 100% perfect research fit. It's vital to learn about things other than The Science on your visit. After all, you'll be living there too.

7. Meet the professors you want to work for. Some people don't think this is very important. But you're committing a lot to the program--more than half a decade. Shouldn't the person who will hold your fate in their hands at least show up to meet you?

There are a lot of professors who don't go to recruiting weekends, citing busy travel schedules. To an extent, that's understandable. But is the professor you want to work for more interested in promoting themselves than in serving as a mentor to you?

Meeting professors and groups will also show you that group webpages are deceiving. Some professors who seem crazy good on paper are kind of weird and creepy in real life. Conversely, some folks with lame webpages or research that didn't strike you as appealing are engaging and exciting to talk to.

8. Don't do homework at the hotel. It's just undergrad. Don't take it so seriously. Use the free coffeemaker! Hang out with other prospective students and current grad students! Seriously: take the opportunity to socialize on the visits, even if you're tired.

Overall, remember: in the worst case, your visit weekend will mean free food and a free trip. Unless, of course, the university loses the paperwork and doesn't reimburse you for the plane ticket, in which case you paid $600 for a trip to a weird town but didn't get to see anything except NMRs.

* Some departments do seem to want to waste money and time, but that's another matter. 
** There are exceptions, of course. Some schools don't have the budget to pay for prospective students to travel. Others only have enough money to pay for gas or meals. Some will make you arrange your own travel.
*** Obviously, extreme differences -- say $19k vs $25k for the same city -- might be worth considering. 
**** Actually, from experience: Nobel prize winners are, more often than not, terrible speakers. But it's nerd cred to hear them, I guess.

Sunday, January 27, 2013

Reading assignments, vol. 8

Links and interesting topical stories from the week follow below.

Academia

  • Scientist and writer DNLee of Scientific American gives a powerful account of the factors denying good STEM education to many students (namely, those of low socioeconomic status). It's a very important read, as it highlights many issues in science education (and education and science culture in general) that very often get ignored.
  • At Gene Expression, Razib Khan comments on affirmative action and science. He's largely dismissive of it, saying science doesn't need cultural diversity per se (with a caveat that such diversity is valuable from a social perspective). It's a worthwhile read; only the myopic would be reluctant to admit there is a rather skewed demographic makeup in science relative to the entire population.
  • Derek Lowe has a commentary on an ACS Med. Chem. Lett. opinion piece regarding the role of academia in drug discovery. It's worth thinking about, especially for those interested in science funding or science policy. A strong case can be made that academia cannot replace pharma as a productive engine of drug discovery, but the decoupling from financial risk means academic labs can push innovations that are potentially high-impact but not necessarily profitable.

Chemistry job market

  • Glen Ernst comments on a 1979 article from C&EN bemoaning an impending surplus of chemistry PhDs relative to the number of available jobs. As Glen points out, "non-traditional" here meant not being a university professor. Today, the scope of "traditional" careers has broadened, but the employment outlook seems bleaker. Still, it's an interesting insight from the late '70s.
  • I found this Chemjobber interview of ChemDraw wizard (and recently-hired PerkinElmer employee) Pierre Morieux quite interesting. It's a neat career path, and a cool story of how social media and online networking can land you a job. At the same time, comments imply that some chemists think it is overkill (and perhaps a telltale sign of the job market) that a long PhD and a competitive postdoc do not result in a "traditional" job. (I'd caution that non-"traditional" careers aren't necessarily fallbacks and can be more rewarding than the big-name jobs; I'd also point out that many people in many professions change career paths many times!)
  • At Chemistry World, economist Paula Stephan has some perhaps-controversial, perhaps-obvious (depending on who you ask) points on the PhD glut. She likens grad school to a pyramid scheme, arguing that its focus has shifted from producing quality scientists to producing PI-promoting research. She has a series of thoughtful recommendations for improving graduate education. Derek Lowe notes the article and comments on the proposal to increase permanent lab staff (i.e., how would we fund it?).
  • Chemjobber has some commentary and depressing statistics on the job market and unemployment rate for chemists (spoiler: it's worse than the average rate for bachelor's degree holders).
  • Don't miss See Arr Oh and Chemjobber's podcast on amusing interview stories.

Thursday, January 24, 2013

Plagiarism and the role of publishers

Anyone who reads Just Like Cooking or Chemjobber is by now probably aware of the freshly-discovered plagiarism case wherein a 2013 Chem. Eur. J. article by Professor Xi Yan lifts, verbatim or nearly so, entire portions of an earlier (2009) JACS paper by Professor Valérie Pierre. Judging from the commentary, the plagiarism is pretty egregious. Additionally, comments and a further blog post by See Arr Oh have brought up more instances of plagiarism like this.

Of course, #spacedino immediately comes to mind (for the uninitiated, that particular hashtag refers to the 2012 controversy in which esteemed chemist Ronald Breslow, of Columbia University, was found to have submitted essentially the same publication to multiple journals; it was only really caught because biologists took his lame ending joke seriously). After lengthy conversation on various media and social media outlets, the offending papers were retracted at Breslow's (and others') request (though initially he denied wrongdoing).

While the Breslow violation was aggravating and arguably wrong, the Yan violation is worse. I don't think that's a very controversial point to make, but it's an important one to recognize. The Breslow affair fell into what some categorized as an ethical grey zone; the Yan violation is in clear things-they-tell-you-from-day-one-in-university territory.

Rapid retraction is vital, I'd wager; though many retracted papers still get heavily cited or believed, early correction is probably the best way to prevent such propagation. Once an article has been believed as true for a substantial period of time and gets included in dissertation or paper-introduction citation-vomits, less care is taken to check the original source and see the glaring retraction notice. Better to stop the train before it leaves the station.

I think this highlights the role of open and quick dialogue among scientists, along the same lines as Blog Syn. The ability to rapidly disseminate these issues when they are discovered can, ultimately, improve the quality of scientific communication. In a pre-social-media era, it was certainly easier to just assume a paper would slip by unnoticed. Not so much anymore; via Twitter, for instance, a wide audience can be quickly reached. See Arr Oh wrote that he has notified the two publishers (Wiley and ACS); it will be very interesting to see what action occurs.

Of course, a point could be made that such post-publication watchdog action shouldn't be necessary. Aren't these things peer reviewed? Alas, I think anyone with a healthy dose of realism is aware that a good bulk of reviewers put in the minimum conceivable effort in reviewing (consider the sheer quantity of bad Supplementary Information files). Usually, this is justified by a variety of factors, namely that reviewers are (1) busy and (2) unpaid. (And they aren't held accountable if the article is later retracted).

But hey, you know who are paid? Publisher staff. I'm a bit surprised that more journal editors haven't made a practice of submitting manuscripts to plagiarism detection software. There are many available; universities often have TAs use them to check undergraduate papers. One comment on Chemjobber's blog points out that Elsevier provides access to one such service, iThenticate, to its reviewers. Another readily available service, eTBLAST, compares blocks of text to databases of scientific papers using the BLAST algorithms commonly employed in bioinformatics for sequence alignment. In short, many tools exist for plagiarism detection. So what's the deal, journals? A common claim of open-access opponents is that paywall publishers "add value" to papers through the publication process.

So why aren't all papers routinely submitted to these checks after peer review and before publication? Ideally, reviewers should do a thorough job vetting submitted articles and should be supplied with tools to do so. But publishers share that responsibility. If you can Cantrill an article, it shouldn't slip by.
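
(For what it's worth, the core idea behind these tools is simple enough to sketch. Below is a minimal, purely illustrative Python toy -- not eTBLAST's or iThenticate's actual algorithm, and the file names and threshold are made up -- that flags shared word n-grams between a submission and one prior paper:)

```python
def ngrams(text, n=8):
    """Return the set of n-word shingles in a text (crude normalization)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(candidate, source, n=8):
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    return len(cand & ngrams(source, n)) / len(cand) if cand else 0.0

# Hypothetical usage: compare one submission against one earlier paper.
# Real services compare against huge full-text databases, not a single file.
submission = open("submission.txt").read()
prior = open("prior_jacs_paper.txt").read()
score = overlap(submission, prior)
if score > 0.05:  # arbitrary threshold, for illustration only
    print(f"Flag for human review: {score:.1%} of 8-word phrases shared")
```

(Verbatim copying of entire paragraphs, as in the case above, lights up exactly this kind of check, which makes it all the more puzzling that such checks aren't run routinely.)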

(One caveat: I'm not sure about the journal coverage of many of the anti-plagiarism packages. eTBLAST, for instance, appears to have access to Medline and PMC, but doesn't seem to have full text for many of the synthetic journals. That's from an initial assessment; I may be wrong. CrossCheck, the service powered by iThenticate, has a very wide list of included content, but interestingly, I don't immediately see ACS on the list.) Is it possible that some of these cases just don't get picked up by the software?

Lastly, something that would be very interesting: how much scientific plagiarism occurred in the pre-software-detection era? That is, how prevalent was plagiarism in the 1960s? If one took journals from that decade and ran them through this software, what would turn up? Is plagiarism more prevalent now, or less? When did scientific plagiarism become widespread? I suspect the issue is a very old one.



Sunday, January 20, 2013

Reading assignments, vol. 7

No weekly link roundup last week due to the #GradMentalHealth discussion, but they resume below. Some pretty interesting stuff!

Research highlights

  • From Quintus at Chemistry-Blog comes an account of a synthetic analogue of the ribosome. The authors (see the publication in Science) used it to produce milligram quantities of a peptide! This was also covered in C&EN and by See Arr Oh.
  • Derek Lowe gives a thoughtful perspective on an article summarizing the state of the field of "virtual screening". Worth a read for anyone interested in med-chem.
  • I'm not usually a fan of reading about or listening to total syntheses, but B.R.S.M. gives a pleasant account of Shair's total synthesis of (+)-hyperforin.
  • Bacterial toxins are interesting things! Check out this write-up on a PLoS Pathogens paper on toxins produced by C. difficile (nasty secondary infection common to those taking heavy loads of antibiotics). Warning to the it's-not-interesting-unless-I-can-column-it-and-solve-the-NMR-spectrum folks: the toxins in question (TcdA and TcdB) are enzymes and not small molecules. 
  • Some intriguing research relating to rapid diagnosis of bacterial infections has been highlighted at Scientific American. The authors used secondary electrospray ionization mass spectrometry (SESI-MS) to analyze breath samples for volatile organic compounds; they could correlate MS profiles with specific infections. It's an interesting idea; given the ability of bacteria to change their metabolism quite flexibly, I'd like to see how many false positives/negatives show up in actual trials.


Thursday, January 17, 2013

Blog Syn: B.S. to cut through the B.S.

Something I find very exciting happened today: the inaugural post of online synthetic peer-review community Blog Syn. Head over and check it out; remember also to subscribe to it (you'll note no exorbitant subscription fees).

The website is a rapid, international, and open source of peer review of the synthetic literature, focused especially on reproducibility. Essentially, it's a curated set of data: chemists point out interesting or provocative procedures, other chemists try the procedures and document the process and results, and then the efforts are cataloged and published for broader review and comment by the chemically-inclined portion of the internet.

The amount of "junk" in the literature is a well-established problem (though if every procedure you've ever tried has worked as advertised, congrats). Sometimes you can get difficult literature procedures to work if you discover a variable that wasn't included in the original source (e.g. source of the material, trace impurities, level of rigor in keeping things dry, heating method, etc.) -- but then, since you're merely reproducing a literature procedure, this detail often doesn't make it into your subsequent paper ("Material X was prepared as described by Author Z."). Moreover, since the publication timescale is slow and synthetic corrections/retractions are rare, lots of money might be wasted trying to repeat useless procedures (counterpoint: some procedures work really well, and it's in everyone's best interest to hear of those, too!).

Blog Syn is a really exciting idea, and I think it fills a niche neglected by the current traditional literature. I'll elaborate.

Blog Syn's closest analog is Org Syn, of course (hence the quasi-tongue-in-cheek name). Org Syn is a great journal if you ignore the clunky online interface. It's based on reproducibility. Proposals that are accepted are checked at least twice in a reviewer's lab using meticulous detail supplied by the authors about reagent quality, purification method, etc. The procedure must be reproducible to within 5% yield as written. A tall order--how many chemical steps in the literature would survive those strict guidelines? Accordingly, Org Syn has a good reputation: if you can't reproduce a procedure, it's likely an issue on your end, not on the literature side.

However, Org Syn isn't a panacea for problematic protocols. Procedures are submitted intentionally by authors; Org Syn doesn't review existing publications. Hence, there's self-selection. Authors choose to send things they know will (likely) work. Sketchy stuff is submitted elsewhere.

That's where Blog Syn has an advantage. It's democratic--any interested chemist with the time and materials can potentially contribute. It's an actual dialogue--those without access to the means to carry out experiments can still suggest techniques and comment on the data. And it's rapid. It's very rapid. Take the first post for example, which was compiled less than a month after the article was made available as an ASAP (many of the experiments were done before the article was even assigned a page number). Hence, before the broader chemical community became aware of the article, it had been vetted, including a discussion with the authors.

That's cool. 

There's been a lot of support from bloggers, including inaugural contributors B.R.S.M., Organometallica, and Matt Katcher, project starter See Arr Oh, and chemblogging king Derek Lowe. There's been some doubt in the blog comments as well, chiefly about who verifies that the checking was done correctly and how the whole thing will operate organizationally. I think these concerns are fairly minor hurdles; with sufficient checkers, the number of trials is well over the standard n = 1 for publication. The idea, as I understand it, is for a small group of volunteers to check a reaction that has sufficient interest and feasibility.

I think it would be helpful and important to get some PIs on board, for two reasons: (1) a PI might become irate upon discovering their student burning time and reagents behind their back (after all, reagents and NMR time cost grant money), whereas a PI who gives their blessing can contribute resources; and (2) a lot of chemists still doubt that non-traditional publication venues (read: the Internet) offer anything of value; while I think Blog Syn will prove itself, having some PIs contribute might be quite transformative in shaping the face of peer review--a combination of open access and acknowledgement of social media. Attaching "established" names could soften the connotation of anonymity and sketchiness that blogging carries in the eyes of many academics.

Anyhow, it'll be quite fascinating to see how it shapes up. Maybe it won't go much further. Or maybe it'll force authors to be more accountable for the science they preach.

Monday, January 14, 2013

Pseudoscientist calls science dogmatic (surprise)

[Photo: the band Bad Religion, to which bad science has been compared. Source: Flickr, available via CC license.]
In a post a couple of weeks ago on the Huffington Post blog, Dr. Rupert Sheldrake lamented what he regards as crippling dogmas in science (I'm relieved to see it placed in HuffPost Religion and not HuffPost Science). The piece is titled "Why Bad Science Is Like Bad Religion." For reference, a photograph of Bad Religion is shown above.

The Huffington Post piece is a diatribe devoid of evidence. Says Sheldrake:
I have been a scientist for more than 40 years, having studied at Cambridge and Harvard. I researched and taught at Cambridge University, was a research fellow of the Royal Society, and have more than 80 publications in peer-reviewed journals. I am strongly pro-science. But I am more and more convinced that the spirit of free inquiry is being repressed within the scientific community by fear-based conformity. Institutional science is being crippled by dogmas and taboos. Increasingly expensive research is yielding diminishing returns.
He starts off relatively normal; the argument that research is driven by conformity has been explored recently by writers including John Ioannidis. It's got truth to it. Grant funding is scarce, and grant proposals must draw heavily on literature precedent (putatively to show the money will not be wasted). Though this does select against potentially very innovative projects, some avenues exist to fund "startup" ideas (see NIH Challenge Grants for instance). And conformity has some utility: it would be very, very expensive to fund every single crazy idea--unsustainably so. This is a reason the federally-funded National Center for Complementary and Alternative Medicine (NCCAM) has received criticism and proposals for defunding. (NCCAM is part of the NIH! What??).

Next the punches at scientists begin to come out:
Bad religion is arrogant, self-righteous, dogmatic and intolerant. And so is bad science. But unlike religious fundamentalists, scientific fundamentalists do not realize that their opinions are based on faith. They think they know the truth. They believe that science has already solved the fundamental questions. The details still need working out, but in principle the answers are known.
Ah, the classic "science is as based on faith as religion is" argument. That's simplistic and shallow, of course, and many authors (including, obviously, Richard Dawkins) have countered this stale line of thought. Sheldrake seems to suggest that religious fundamentalists are more self-aware than scientists are--that they're more aware of their own limitations.
Since the 19th century, materialists have promised that science will eventually explain everything in terms of physics and chemistry. Science will prove that living organisms are complex machines, nature is purposeless, and minds are nothing but brain activity. Believers are sustained by the implicit faith that scientific discoveries will justify their beliefs. The philosopher of science Karl Popper called this stance "promissory materialism" because it depends on issuing promissory notes for discoveries not yet made. Many promises have been issued, but few redeemed. Materialism is now facing a credibility crunch unimaginable in the 20th century.
Here Sheldrake assumes that by following the scientific method, you consign yourself to cold nihilism, to blank materialism devoid of any joy or meaning. That's also ridiculous. As Douglas Adams said: "Isn't it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?"

And there is no such "credibility crunch". Scientific tools are cheaper and faster than ever before, and advances are rapid. Genome sequencing, for instance, is increasingly quick and affordable. Biology, "materialistic" as it may be, has grown from a descriptive endeavor to a broad arena utilizing information science, chemistry, systematics, physics, and more to rationalize life processes. If there is a credibility crunch, I'm missing what it is: science seems to be doing quite well.

The following is, however, the shakiest part of Sheldrake's argument:
Despite the confident claim in the late 20th century that genes and molecular biology would soon explain the nature of life, the problems of biological development remain unsolved. No one knows how plants and animals develop from fertilized eggs. Many details have been discovered, hundreds of genomes have been sequenced, but there is still no proof that life and minds can be explained by physics and chemistry alone. 
The technical triumph of the Human Genome Project led to big surprises. There are far fewer human genes than anticipated, a mere 23,000 instead of 100,000. Sea urchins have about 26,000 and rice plants 38,000. Attempts to predict characteristics such as height have shown that genes account for only about 5 percent of the variation from person to person, instead of the 80 percent expected. Unbounded confidence has given way to the "missing heritability problem." Meanwhile, investors in genomics and biotechnology have lost many billions of dollars. A recent report by the Harvard Business School on the biotechnology industry revealed that "only a tiny fraction of companies had ever made a profit" and showed how promises of breakthroughs have failed over and over again. 
Despite the brilliant technical achievements of neuroscience, like brain scanning, there is still no proof that consciousness is merely brain activity. Leading journals such as Behavioural and Brain Sciences and the Journal of Consciousness Studies publish many articles that reveal deep problems with the materialist doctrine. The philosopher David Chalmers has called the very existence of subjective experience the "hard problem." It is hard because it defies explanation in terms of mechanisms. Even if we understand how eyes and brains respond to red light, the experience of redness is not accounted for.
Here Sheldrake commits one of the biggest anti-science sins: worship of gaps. This is what anti-evolutionists say, too: "There are unexplained gaps in the fossil record!" "We don't have records of every fossil!" "Why aren't there transitional forms?" Science hasn't failed to explain biological development. We know a lot about it. And there aren't any big insurmountable walls. The picture keeps getting filled in. And the extension of heritable biology beyond simple genetics is illustrative of life's complexity: it isn't a case for adoption of magic and psychic nonsense.

Sheldrake says "there's no proof that consciousness is merely brain activity"; but importantly, he neglects that there's no proof for anything supernatural, either.

I admit I'd never heard of Sheldrake before reading this piece, which I learned of when it was shared by a nutritionist I follow on some social media website. Who is this guy? I wondered. Should I know of him? Is this a Harvard biologist I hadn't heard of? Maybe a tenured professor at a smaller school who's big in the education field, or in science writing? The bottom of the article made him sound reputable (though yes, they spelled it "resaerch"):
Rupert Sheldrake, Ph.D., is a biologist and author of Science Set Free. He was a Fellow of Clare College, Cambridge University, where he was Director of Studies in cell biology, and was Principal Plant Physiologist at the International Crops Resaerch Institute for the Semi-Arid Tropics in Hyderabad, India. From 2005-2010 he was Director of the Perrott-Warrick Project, funded from Trinity College, Cambridge. His web site is www.sheldrake.org.
The problem is, Huffington Post made him sound like an actual real-live scientist. I guess he is indisputably an author. But his credentials, as outlined on his website, are spotty (not that non-traditional career paths are bad); it's interesting that he is crying out against dogma, yet he lists his most traditional posts in the article to establish his identity.

Sheldrake did indeed get his PhD from Cambridge and was initially regarded as a promising student. But he has, for all intents and purposes, parted ways with the scientific community. His career started out well (he was a Crick student and a fellow at Cambridge), but he ventured into the realm of magic with the 1981 publication of his book A New Science of Life: The Hypothesis of Formative Causation. Scientists quickly recognized the work as unscientific, but Sheldrake held onto his ideas and has for the last 31 years been in conflict with, you know, real biologists.

For a more thorough analysis of his career, read this account.

Is his work really that far out? Is it really unscientific? Is he being unfairly pilloried by dogma-loving, chauvinistic power-hungry capitalist traditional hegemonically dominating scientists? Is his critique of science as prejudiced and fear-driven justified?

Nah, he's way off base. His "science" consists of self-promoting, mystic, pseudoscientific nonsense that preys on the superstitions of the uninformed. From his own website, his research areas include: unexplained powers of animals; experimenter effects; morphic fields; the sense of being stared at; telepathy.

My personal favorite: "Can you wake a sleeping animal by staring at it?"

Before someone yells "continental drift!" at me: there's a big difference between someone being shunned for crazy theories without evidence and someone being shunned for theories with evidence. Science is self-correcting, and its dogmas do shift over time, provided there are facts and analysis to back the shift up. Sheldrake has not provided the extraordinary evidence (or any evidence at all) that extraordinary claims require.

He touts his publication record: early on he had some high-profile papers doing actual science (such as a 1968 paper in Nature), but his last few decades have been dominated by fringe-science journals like the Journal of Scientific Exploration and the Journal of the Society for Psychical Research.

[Photo: Dr. Rupert Sheldrake. Source: Wikipedia, public domain.]
So is Sheldrake taken seriously by non-biologists? The problem is: he seems to be. We live in a society that loves Dr. Oz and treats alternative medicine as an equally valid alternative to evidence-based therapy. So, not surprisingly, there are a variety of fawning articles, reviews, and interviews. One piece proposes that Sheldrake may be a modern-day Isaac Newton. And several of his books have sold very well, according to Amazon stats.

As I previously mentioned, the article grossly misrepresents Sheldrake's credentials and area of "expertise". He calls himself a biologist, but that's a bit of a stretch. Sheldrake is better known for his parapsychology research and doesn't appear to have recently held, for any appreciable amount of time, a job we would consider that of a "biologist".

Shrouding pseudoscience in science's clothing is a common tactic; shame on Huffington Post for not being clearer about who Sheldrake really is. When pseudoscience and actual science are blurred (especially by those who claim to be scientific experts), a great harm is done to the public good and the advancement of scientific thinking.

In final response to Sheldrake's points: of course there is dogma in science.* Of course there's resistance to change. But it's not in the things he's suggesting (he confuses how science works with what science says). And science, unlike religion, is self-correcting; views change over time, and they change broadly across the field. We don't have a thousand subsciences that conflict with each other, insisting the others are doomed to burn for eternity in Tetrahedron Letters.

* One such dogma is that n = 1 is sufficient for comparing the yields of two reactions. "Putting in LiCl raised the yield from 89% to 92% and the ee from 91% to 96%". Yeah, no it didn't.
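
(If you want to convince yourself, here's a minimal simulation sketch in Python. The 3-percentage-point run-to-run standard deviation is my own illustrative assumption, not measured data, but with it, conditions whose "true" yields really are 89% and 92% swap order in single-run comparisons about a quarter of the time:)

```python
import random

random.seed(0)
TRIALS = 100_000

def run_reaction(true_yield, sd=3.0):
    """One simulated run: the 'true' mean yield plus run-to-run noise (in %)."""
    return random.gauss(true_yield, sd)

# Two conditions whose true mean yields really are 89% and 92%.
flips = sum(run_reaction(89) > run_reaction(92) for _ in range(TRIALS))
print(f"n = 1 comparisons where the 89% condition wins: {flips / TRIALS:.0%}")
```

(So a single 89%-vs-92% comparison tells you roughly nothing; run replicates or report error bars.)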

Friday, January 11, 2013

Is graduate school in chemistry bad for your mental health? Part 5 (finale)

This is a bit of a late post, but make sure to check out part 5 of our graduate school mental health dialogue over at Chemjobber for some comments and final thoughts. I hope this was a helpful and/or eye-opening dialogue to readers, and I thank Chemjobber for the opportunity to discuss this topic!