Objectivity, subjectivity, and how to evaluate scientific studies

TZ Donating Member (1000+ posts) - Wed Nov-07-07 05:58 AM
Original message
Objectivity, subjectivity, and how to evaluate scientific studies
This post is not an attempt to sway anyone's beliefs on homeopathy, UFOs, ESP, etc., but merely an attempt to share what I have learned about evaluating data and evidence. Critical thinking skills are, IMHO, among the most important tools in a scientist's toolbox and something I think more Americans need to learn. And I have come to see that those who have not had the benefit of a scientific background or education might not understand some of these distinctions well. So in an effort to educate and enlighten (something I try to do with my posts), I will explain some of the skills that are taught. I would like to think people could use them to examine everything from the aforementioned topics to the claims our government makes and the things other politicians say and do.

First: subjective evidence vs. objective evidence

Subjective evidence is basically how each of us personally observes and evaluates: your own personal interpretation of sensory input. Sometimes it's accurate and sometimes it's not. I hear people say, "I know what I saw. My eyes don't lie." Maybe they don't, but our brain is certainly capable of misinterpreting what it sees and hears. That's actually what happens with learning disabilities like dyslexia: the brain does not interpret the input correctly (transposing letters in correctly written words, leaving the text incomprehensible to the sufferer). Sometimes we see what we expect or want to see, like the example I cited in my previous post about the serial sniper: people looked for and "saw" a white box truck instead of the real vehicle, a maroon Caprice, because that's what they believed the suspect vehicle was.

Objective evidence: some people claim that no one can ever be completely free of their own biases, agendas, opinions, etc. That is true to some extent, but there are ways to compensate for it. It's one of the reasons peer review is so important in the evaluation of study data. Journals try to get reviewers who, while in the field, have no direct connection to the author and no reason to be biased for or against them, so as to get as close as possible to an objective evaluation of the methodology, the results, and the conclusions drawn from those results. That's why I have a hard time taking citations seriously when they come from sources without good objective vetting of data (and it's why I don't like Wikipedia as a citation for scientific evidence).

Next: how to evaluate a scientific study. As everyone should know, just because a study has been published does not make it good science, especially if the peer review was lacking. Seriously flawed studies do sometimes get published and used (because of political agendas, etc.), and it helps to understand what you are looking at before you accept or reject a study.

Study size: of course this makes a HUGE difference. Small studies really don't tell us much about how well the conclusions generalize to the whole population; obviously a bigger sample size is better. Unfortunately, sample size is a big limitation in clinical trials. It's understandably hard to get people to volunteer to be "guinea pigs," so to speak. Despite what some think, that is why a lot of drugs run into problems after they reach the market (Vioxx, for example). Even at accepted clinical trial sizes there are often not enough subjects to truly anticipate all the possible side effects once a drug becomes widely available to the public. That's why the FDA has an indefinite post-marketing monitoring phase: the agency understands the scientific limitations of clinical trials. Vioxx was known to have potentially serious side effects (as many drugs do, unfortunately; nothing is 100% safe, and medicine is not an exact science, with plenty of unpredictability involved), but because of the natural limits on study size it was not understood how common those serious problems were until wide use of the drug made it evident. Thus Vioxx was pulled.
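
A back-of-envelope sketch of why this happens: the chance of seeing at least one case of a side effect with true rate p among n independent patients is 1 - (1 - p)^n. Here is a minimal illustration in Python; the event rate and exposure numbers are made-up assumptions, not actual Vioxx figures.

    # Probability that at least one of n patients shows an adverse event
    # whose true underlying rate is p (independence assumed).
    def chance_of_seeing_event(p, n):
        return 1 - (1 - p) ** n

    rare = 1 / 10_000                      # hypothetical 1-in-10,000 side effect
    for n in (300, 3_000, 3_000_000):      # small trial, big trial, market exposure
        print(n, round(chance_of_seeing_event(rare, n), 3))
    # 300 -> 0.03, 3000 -> 0.259, 3000000 -> 1.0: a trial-sized sample will
    # usually miss the event entirely; a market-sized population will not.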

Statistical analysis: this is not my area of strength, so I won't get overly technical, but suffice it to say that there are different ways of analyzing data and different statistical tests, and some are more appropriate than others for a given dataset. I have seen people try to manipulate results this way, by applying an inappropriate analysis. It is entirely possible to get a significant result using one type of test and a non-significant result using another.
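
To make that concrete, here is a small, contrived illustration (assuming Python with scipy installed; the numbers are invented): the same before/after measurements on five subjects look non-significant under an unpaired t-test but clearly significant under a paired one.

    from scipy import stats

    before = [10, 20, 30, 40, 50]
    after  = [12, 22, 33, 41, 53]   # each subject drifts up slightly

    # Unpaired test ignores the pairing; the large subject-to-subject spread
    # swamps the small treatment effect (p is around 0.8).
    print(stats.ttest_ind(before, after))

    # Paired test looks at within-subject differences and finds the effect
    # (p is around 0.004).
    print(stats.ttest_rel(before, after))

Neither test is "wrong" in general; the point is that the choice has to match how the data were collected, and a motivated analyst can exploit that.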

Wrong conclusions drawn / what statistically significant figures mean: a statistically significant result does not by itself mean we can draw firm conclusions from it, and wrong conclusions are often drawn (again, usually when an agenda of some sort is involved; this is what happened with Avandia, IMO, where eagerness to market led to ignoring or misreading the safety studies). In small study sizes, a small change up or down produces large swings in statistical significance and can look more meaningful than it really is. A recent study posted here reported a 2X doubling of a health risk in rats, but when you looked at the actual numbers the risk jumped from 0.5% to 1.0%, or something like that: not really a huge concern in reality, even though it was indeed "statistically significant."
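
The arithmetic behind that distinction is worth spelling out. A rough sketch using my recalled numbers (illustrative only, not the actual study data):

    control_risk = 0.005                             # 0.5% of control animals affected
    treated_risk = 0.010                             # 1.0% of treated animals affected

    relative_risk = treated_risk / control_risk      # 2.0 -> "risk doubled!"
    absolute_increase = treated_risk - control_risk  # 0.005 -> half a percentage point
    number_needed_to_harm = 1 / absolute_increase    # ~200 subjects per extra case

    print(relative_risk, absolute_increase, number_needed_to_harm)

A headline built on the relative risk sounds alarming; the absolute increase tells you how much it matters in practice.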

Anyway, those are some of the methods I have learned (both formally and informally) for evaluating evidence. As I said, too many people jump to conclusions about various issues, scientific, political, and otherwise, IMO. So I thought I would share some of the ways I and others like me draw conclusions.
Feel free to disagree with my analysis, but I think it's a rational and logical way of dealing with the world around us.
HereSince1628 Donating Member (1000+ posts) - Wed Nov-07-07 07:09 AM
Response to Original message
1. Sample size and statistics are very much linked, and errors in sample size can be a source of misinterpretation
Edited on Wed Nov-07-07 07:16 AM by HereSince1628
Consequently, I'd say three of your issues are often capable of confounding each other.

As an example to demonstrate this phenomenon (rather than to teach the intricacies of statistics), consider the question of whether the roughness of a sidewalk is a problem. The working null hypothesis for such a study would be something like: sidewalk squares in my neighborhood show no difference from the standard for sidewalk smoothness. Or a comparison could be made between sidewalks simply to see whether their roughness differs: sidewalk squares from neighborhoods A, B, and C show no difference.

A small sample of sidewalks will only be sensitive enough to detect large-scale rugosity, perhaps nearing pothole size: so large that you don't need statistics to argue there is a problem with the sidewalk. On the other hand, a very large sample may be sensitive enough to detect the roughness caused by the wearing away of the Portland cement between sand grains, which leaves the grains raised above the surface. That level of roughness is undoubtedly present in the sidewalk, but it is not the kind that is going to catch, say, the stiletto heel of a woman's shoe.


Now, assuming some aspect of the problem requires a statistical decision aid to help you determine whether the sidewalk contains that roughness, you must design a study that neither misses the roughness nor is so sensitive that it detects all roughness. Most approaches to statistical design technically require prior estimates of the very quantities the study is intended to measure; in problems like this example, you would minimally need to know how much the roughness varies. That is something that can be known only if the system under study is well understood from previous work, or through a preliminary (pilot) study that provides some guidance on the issue.
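
As a rough sketch of how that pilot estimate feeds into design, here is the usual normal-approximation sample-size formula in Python; the sigma and delta values are invented for illustration.

    # n ~ ((z_alpha/2 + z_beta) * sigma / delta)^2 for detecting a mean
    # departure delta from the standard, given roughness variability sigma.
    Z_ALPHA = 1.96   # two-sided 5% significance
    Z_BETA  = 0.84   # 80% power

    def squares_needed(sigma, delta):
        return ((Z_ALPHA + Z_BETA) * sigma / delta) ** 2

    sigma = 2.0                      # mm, from a hypothetical pilot study
    for delta in (5.0, 1.0, 0.1):    # pothole-ish, moderate, sand-grain scale
        print(delta, round(squares_needed(sigma, delta), 1))
    # 5.0 -> 1.3, 1.0 -> 31.4, 0.1 -> 3136.0 sidewalk squares: detecting
    # sand-grain roughness takes ~2,500x the sample needed for coarse roughness.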

Different people, because of their interests and experience, are more or less careful with various aspects of study design. Some worry about statistical sensitivity, some about statistical power, some about inherent biases underlying sample selection, and some about whether the devices used to collect a data signature are actually collecting the signature they are interpreted as collecting.

If you are interested in whether the roughness of the sidewalk is enough to cause tripping, you must know roughly what dimension of roughness causes tripping. If you are interested in how roughness changes over the lifespan of a sidewalk, you need to collect data on its age, but the change could also be a consequence of differences in construction methods, preparation of the gravel base, use (was the sidewalk part of a driveway?), and so on. Confounding variables abound, and to get to anything like cause-and-effect interpretations, a whole lot of expertise is needed just to have a hope of controlling the study.

Around DU we worry a lot about push polls and the like, but unintentionally bad questions, or questions that yield unreliable responses, can be and often are a huge problem in questionnaire-based studies, because the statistic generated from the data can be computed regardless of whether the questions were good or poor.

I probably don't need to wax long on the concept that many studies using statistics are not very well constructed.

But the rub is that, as presented in the popular press, whether it's a statistical analysis of political attitudes or of the role of dietary sodium in sleep disorders, we usually aren't given enough information in a media clip to seriously evaluate the study, even if we had the necessary expertise. It takes expert knowledge to deal properly with studies involving even modest levels of signal distortion, signal disruption, and signal significance when there are known uncertainties about what contributes to the data signal's shape and strength.
 
TZ Donating Member (1000+ posts) - Wed Nov-07-07 07:20 AM
Response to Reply #1
2. Thanks for the clarification
Yeah, study design is really challenging. I was given a chance to do a little of it at a recent job, and I have a LOT more respect for the people who do it for a living.
It's absolutely true that it's really hard to make heads or tails of a lot of the stuff in the popular press because of how little detail we are given.
BTW, if I could recommend an individual post, I would recommend yours. :thumbsup:
 
TheDoorbellRang Donating Member (1000+ posts) - Wed Nov-07-07 07:32 AM
Response to Original message
3. A few years ago I read a book called "The Emperor of Scent"
It was the story of a man who had developed a new theory of how the olfactory sense works. The main thing I remember from it was the idea that because he was versed in multiple scientific disciplines, he was able to come up with the theory, which was still in dispute in the scientific community. The author spoke of the compartmentalization of scientific inquiry and how this trend can limit what research can discover. What's your take on this?

Another question: do you think there may be problems with inherent bias in certain studies, especially social studies? I'm thinking in particular of seeing the "five races of man" in museums when I was a kid, then the undercurrent of discomfort when Leakey theorized that all of mankind evolved in Africa (e.g., OMG, we're all black!1!1!), through to the current genome project, which pretty much confirms what Leakey thought. I've often thought that's the main objection to evolution, in the fundie's heart of hearts.

Third, the "woo woo" question, regarding ESP, ghosts, etc. Current scientific methodology has not been able to confirm the existence of these phenomena, but IMHO they should not then be dismissed out of hand. Rather, I'd like to see some clever researcher come up with a viable method to measure them; it just hasn't happened yet. In the meantime, I do think there's enough anecdotal evidence to suggest such phenomena should be studied, but I don't expect to see it anytime soon.
 
TZ Donating Member (1000+ posts) - Wed Nov-07-07 08:14 AM
Response to Reply #3
4. I am not familiar with that book
Sounds interesting. There is probably a grain of truth to all that. A lot of good scientists are able to synthesize different disciplines to make important discoveries. For example, a recent study found that protein folding and structure show some similarities to the way dark matter behaves (here's the link for those interested: http://www.sciencedaily.com/releases/2007/07/070719164531.htm).
So a synthesis of physics and biology may be useful in learning how to design better drugs.
I don't think this is a HUGE issue, though, not as much as some make it out to be. There are many creative thinkers out there.
Inherent bias is a HUGE problem in social studies, I think. Anybody who does studies linking things like intelligence with race, gender, religious beliefs, etc., has this issue. I think most of these researchers have subconscious biases, pro or anti, and design their studies with those biases intact. I am VERY suspicious of that kind of research, to be honest.
And yeah, in a perfect world some of these unexplained phenomena would get more research dedicated to them; however, because funding is so limited (you should see some of the battles that go on in the scientific community over funding; yikes!), that's probably not happening anytime soon.
 
TheDoorbellRang Donating Member (1000+ posts) - Wed Nov-07-07 08:17 PM
Response to Reply #4
6. Thanks for the input
Much appreciated. :hi:
 
HereSince1628 Donating Member (1000+ posts) - Wed Nov-07-07 08:34 AM
Response to Reply #3
5. Your three questions, taken up in order...
Edited on Wed Nov-07-07 08:38 AM by HereSince1628
Novel combinations of information often yield interesting, if not radical, new insights and thereby better understanding. Mathematically speaking, the number of combinations a discipline can produce depends on the number of things that can be considered in combination, and not surprisingly this grows exponentially with the number of things available to combine and the number of items in each combination. Moving approaches and information from one field to another makes more combinations available. That's the good thing... BUT... in creating more possible combinations it also creates combinations that aren't meaningful and can be confounding, because of spurious correlations or the way they twist the logic involved in interpreting the combination.
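
A quick numerical check of that growth (plain Python; the counts of "ideas" are generic, not tied to any particular field):

    from math import comb

    # Pairwise combinations grow quadratically, while the total number of
    # possible subsets of n ideas grows as 2**n, i.e., exponentially.
    for n in (10, 20, 40):
        print(f"{n} ideas -> {comb(n, 2)} pairs, {2 ** n} possible subsets")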

Many tools used in data collection are (predictably) biased or limited in their sampling, and many investigators are inherently and unintentionally biased in their use and interpretation. In general, to be meaningful, information must be viewable in the context of prevailing knowledge and paradigms. Big shifts in thinking are rare and are often resisted at first because they don't conform to the prevailing paradigm. Scientists are people; scientific understanding is a product of people and is loaded with all the shortcomings people bring, even though every scientist knows the ideal is to eliminate those sources of error.

The notion of non-corporeal entities isn't only an issue of methodology. It's also an issue of the current state of understanding and of the limiting principles of science. I'd say the historic debunkings of evidence put forward for ghosts (ectoplasm, table taps, etc.) don't do the notion of spirits much good. As you might expect, conceptualizations of ghosts almost by definition defy the rules of demonstrably knowable beings and things. Thus they are left out of any accepted paradigm from which meaningful empirical exploration can be launched. That is why no researcher has yet found a 'viable method to measure' them. Being outside the empirical realm, for existing practical purposes, puts them outside of science. That doesn't mean scientists are closed to the notion that things outside current empirical understanding exist. Bacteria and viruses are excellent examples of things that existed for eons but came into the realm of empirical investigation only recently, as principles and practices progressed.

 
TheDoorbellRang Donating Member (1000+ posts) - Wed Nov-07-07 08:29 PM
Response to Reply #5
7. I'm glad you and turtlensue are here.
Now when I trip across one of the studies here that needs clarification or debunking, I'll know who to contact. Thanks! :hi:
 
