Thread Status:
Not open for further replies.
  1. Steerpike
    Offline

    Steerpike Felis amatus Supporter Contributor

    Joined:
    Jul 5, 2010
    Messages:
    11,085
    Likes Received:
    5,279
    Location:
    California, US

    Psychology and other soft sciences

    Discussion in 'Debate Room' started by Steerpike, Aug 28, 2015.

I've had numerous discussions about this with people in those sciences over the years. The findings here are not particularly surprising. Which isn't to say that psychology and other such sciences aren't valuable, but you have to understand the limitations of the methodology and results, and the conclusions you can draw from them (invented data is another, separate issue). Unfortunately, studies come out and the media runs with them to generate clicks, sell papers, or what have you, and many in the public assume that, because it was in a study, absolute certainty has been established.

    http://www.nytimes.com/2015/08/28/science/many-social-science-findings-not-as-strong-as-claimed-study-says.html?_r=0
     
  2. GingerCoffee
    Offline

    GingerCoffee Web Surfer Girl Contributor

    Joined:
    Mar 3, 2013
    Messages:
    17,605
    Likes Received:
    5,877
    Location:
    Ralph's side of the island.
    You can say that about any research.

Methodology is key, and I find the label 'soft science', as opposed to just 'science', often comes from people who don't understand how the methodology can produce reliable results. Human nature is complex, but so are the weather and other chaotic systems. That doesn't make the science 'soft'.

    On the other hand, pop psychology is commonly mistaken for science.
     
    ManOrAstroMan and Ben414 like this.
  3. Steerpike
    You've missed the point (again). My own test of the hypothesis that you read before posting replies shows it is unlikely you do so, at p<0.0000001.
     
    Aaron DC likes this.
  4. rainy_summerday
    Offline

    rainy_summerday Active Member

    Joined:
    Aug 13, 2015
    Messages:
    245
    Likes Received:
    101
    Would you then argue that the methodology of the natural sciences is more productive than the methodology of the humanities and social sciences?
    I am asking for clarification, because the phrasing of that sentence has made me quite curious.
     
  5. GingerCoffee
    I didn't miss your point. You just don't like my answer.

    The media gets the science wrong, be it soft or hard science. Reporters often write about research beginning with "we now know," which is absurd. Often they are reporting on a pilot study, not even a full study. But they can be just as bad reporting on a new discovery in Martian geology as they can reporting on a psychology study.

    I have a pet peeve about people who think a lot of medical research is 'soft science', somehow inferior to 'hard sciences' like physics and geology.

    Are you claiming the article's finding that psychology research overstates conclusions doesn't apply to reporters not understanding the science they report on? Or that other branches of science don't have the same problem of drawing conclusions not supported by the evidence?
     
  6. Aaron DC
    Offline

    Aaron DC Contributing Member

    Joined:
    May 12, 2015
    Messages:
    2,554
    Likes Received:
    1,251
    Location:
    At my keyboard
    It irks the living snot out of me when people dismiss personal experience coz "study says X." Or when people rely on studies as if they're the new religion, and scientists doing studies the new order of priests that cannot be questioned.

    And then you read things like this: http://www.vox.com/2015/8/27/9216383/irreproducibility-research

    Scientists replicated 100 recent psychology experiments. More than half of them failed.
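
    The arithmetic behind a sub-50% replication rate can be sketched with a toy simulation: even without any fraud, if only some tested hypotheses are true and studies have modest statistical power, a sizeable chunk of published (significant) findings will fail an honest replication. The parameters below (prior, power, alpha) are hypothetical illustration values, not figures from the article.

    ```python
    import random

    random.seed(1)

    # Hypothetical illustration values, not figures from the article:
    P_TRUE = 0.3   # fraction of tested hypotheses that are actually true
    POWER = 0.5    # chance a study detects a real effect (statistical power)
    ALPHA = 0.05   # chance a study "detects" an effect that isn't there

    def significant(effect_is_real):
        """Simulate one study; return True if it reports a significant result."""
        return random.random() < (POWER if effect_is_real else ALPHA)

    published = replicated = 0
    for _ in range(100_000):
        real = random.random() < P_TRUE
        if significant(real):          # only significant results get published
            published += 1
            if significant(real):      # independent replication attempt
                replicated += 1

    print(f"replication rate: {replicated / published:.2f}")
    ```

    With these made-up numbers the expected replication rate works out to roughly 40%, in the same ballpark as the result being discussed, purely from low power plus selective publication of significant results.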
     
  7. GingerCoffee
    Most often personal experience isn't the problem, but the causal relationship the person draws from the experience is.

    For example, I don't doubt some people experience an illness after a flu shot. The problem is when that person concludes the vaccine caused the symptoms when we know from doing placebo controlled studies, it wasn't the vaccine. People get sick around the time flu shots are given because respiratory infections unrelated to the vaccine are peaking when the vaccines are given.

    As for research results not being repeatable, that's true for a lot of research, not just psychology research. Pop psychology is unfortunately much more common than good research in psychology.

    In medicine, there is too much reliance on pilot studies before the full study is carried out.
     
  8. Aaron DC
    Like I said. It irks me when people rely on studies.

    Like they are somehow unquestionable bastions of the absolute truth.

    Laughable.
     
    Lemex likes this.
  9. Ben414
    Offline

    Ben414 Contributing Member Contributor

    Joined:
    Aug 1, 2013
    Messages:
    974
    Likes Received:
    785
    From the article: "Dr. John Ioannidis, a director of Stanford University’s Meta-Research Innovation Center, who once estimated that about half of published results across medicine were inflated or wrong, noted the proportion in psychology was even larger than he had thought. He said the problem could be even worse in other fields, including cell biology, economics, neuroscience, clinical medicine, and animal research."

    The article itself doesn't blame the "soft" sciences; the issue is that some (a minority of) studies don't give enough credit to their confounds and many people don't understand correlation doesn't equal causation.
     
  10. Aaron DC
    Are you suggesting the studies in the article I linked were pop psych and not "good research"?
     
  11. Steerpike
    I don't know about productive, I just think you have to look at them differently. Just like you have to look at quantitative and qualitative data differently and understand the limitations. When you're dealing with quantitative data, it is generally much easier to control for variables, and the data are easier to draw concrete conclusions from. For example, if you're looking at the degree to which a compound enhances fluorescence in a biochemical test, that's pretty easy to measure, and the data aren't subject to multiple interpretations if the experiments are done properly. With social sciences, things are often a bit more fuzzy, both at the testing stage and the analysis stage, and you may have multiple interpretations of the results.

    I don't know how many people get the difference, and it certainly doesn't appear that the media gets it.
     
    Sifunkle likes this.
  12. GingerCoffee
    :confused:

    So a person drawing an unsupported conclusion based on personal experience is more reliable than an evidence supported conclusion?
     
  13. Aaron DC
    In the article I linked, one difference was the acceptance of violence and the impact it had on ratings of said violence between the US and Germany. Unlike fluorescence, which pretty much remains constant around the world, regardless of culture or prevailing societal attitudes.
     
  14. Aaron DC
    So a person's conclusion is not in fact supported by their experience?

    :confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused::confused:
     
  15. Ben414
    While that seems intuitively true to some degree, the article notes the issue may be greater in fields that use more quantitative data such as cell biology, neuroscience, and clinical medicine. I'm not sure how the article's statement fits with the narrative.
     
  16. GingerCoffee
    No, it's a sample size of one (or a couple) with an uncontrolled non-objective observation.

    It's not the experience that is the problem. It's jumping to a conclusion about what the experience means that is the problem.

    Drawing causal inferences is one way our brains manage information. It was useful for knowing to climb a tree when a lion is after you. But less useful when it leads people to believe bad behavior causes crop failures. Knowing there are inherent flaws in our thought process is how one goes from thinking bad behavior caused that crop failure to understanding what actually caused it.
     
  17. Aaron DC
    And now you use emotive language (jump to a conclusion) to continue to dismiss rational thinking people's ability to work stuff out. And then cite a ridiculous example where the result is not subjective (crop failure) to support your position relating to a discussion on psychology, an eminently subjective field of study.

    Thank you for the "It's a sample size of one" advice. Very useful. /sarcasm
     
  18. Steerpike
    I think you're conflating the issues. In the post you're responding to, I'm talking about inherent differences in the types of data, measurements, etc. The article is just talking about what boils down to bad science. That's why in my post I added the qualifier that the experiments be done properly.
     
  19. Ben414
    I agree if your point is that qualitative behavioral studies tend to have a high number of confounds that should be accounted for. I'm not sure how you're defining an experiment that's done properly, though. It seems the potential drawbacks to qualitative data can be mostly eliminated if the methodology is done properly. While there probably would still be a difference between qualitative and quantitative, I would think equally proper methodology to normalize the qualitative nature and a large enough sample size to minimize randomness would make the differences quite small.
     
    Last edited: Aug 29, 2015
  20. rainy_summerday
    I agree. I think I now understand what you mean.

    I wonder, though, if it is not a case of "jadedness." The humanities/social sciences know that they could be wrong. Nowadays, it is bad practice to offer "truths" in these fields. Instead, it is important to mention that your theory is simply "one more possibility."
    Natural sciences, from what I gather (I myself am deeply rooted in the humanities, therefore I am probably biased!), prefer to state "truths." If you read scientific journals, especially medical ones, they tend to focus on how other theories are disproven by a new finding rather than how the new finding will work with other theories. It's more about who is "right" and who is "wrong." The humanities don't care about that (anymore), because they are too pessimistic for that. It's more important to state that their findings could be wrong. It's quite funny when you read students' essays. They imitate published authors: most introductions will include a short, apology-like paragraph about the fact that their interpretation is only one possible reading.

    In a way, the humanities are all too aware of how ideas are simply recycled. There is nothing completely new under the sun, but you can take what is there, give it a new shape, combine it with what you think you know. Synthesise. Literary studies is quite a funny example of that, imo. There is a rather "new" theory, ecocriticism, which looks at how environmental concerns are expressed in (mostly contemporary) texts (especially climate change). However, if you go back to Victorian literature, you also have texts which criticise how London is steadily becoming dirtier, grimmer, more industrial. There are even poems about how the River Thames is being polluted.

    We may give it a name, but that does not mean that it did not exist before we "(re)discovered" it.
    I think natural sciences and humanities/social sciences are more similar than they believe. Which is why I dislike the terms hard/soft sciences.

    In the end, all scientists blow their findings out of proportion for a living. We don't live in times where admiration alone will pay for their ways. Even in those times, artists and scientists alike had to be loud to get heard and subsequently financially supported.
     
  21. Aaron DC
    Money.

    I read some years ago if you weren't into string theory you weren't getting funded in whatever field that was.

    Not only blowing findings out of proportion, but even just finding what the study sponsor is hoping you find.

    Money.
     
  22. Sifunkle
    Offline

    Sifunkle Dis Member

    Joined:
    Aug 4, 2014
    Messages:
    481
    Likes Received:
    570
    This is pretty much where my thought process goes. I will admit that I've never fully understood the science involved in psychology, etc (so am not entirely sold).

    In any field, I think the ability to draw a valid conclusion from evidence largely depends on 1. how great a difference from null you're testing, 2. how variable the subject naturally is, 3. artificial variability brought about by suboptimal methodology, 4. sample size, and 5. freedom from bias (= validity). (@Aaron DC - I suspect most of these factors explain @GingerCoffee 's objection to drawing from anecdote/personal experience, if she's anything like me.)

    Numbers 2 & 3 seem like particular standouts for 'soft sciences', as they deal with subjects that are hard to pin down, and data that are hard to gather objectively (e.g. psychological studies relying on surveys with descriptive responses, which depend on subjects' perceptions and their ability to communicate them). I assume the soft sciences have to compensate for these shortcomings by amping up sample size to achieve the necessary statistical power.

    I'm not sure that qualitative vs quantitative data really matters. In my experience you can have good and bad versions of either, and there are valid statistical methods to address both.
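
    The link between natural variability (point 2) and the sample size needed for adequate power can be made concrete with the standard normal-approximation formula for comparing two group means; the effect size and standard deviations below are made-up numbers purely for illustration.

    ```python
    from math import ceil

    # Normal-approximation sample size per group for a two-sample comparison.
    # The z-values are the standard constants for a two-sided alpha of 0.05
    # and 80% power.
    Z_ALPHA = 1.96   # z for two-sided alpha = 0.05
    Z_POWER = 0.84   # z for 80% power

    def n_per_group(effect, sd):
        """Participants per group to detect a mean difference `effect`
        when individual subjects vary with standard deviation `sd`."""
        return ceil(2 * ((Z_ALPHA + Z_POWER) * sd / effect) ** 2)

    # Same effect size, increasing subject-to-subject variability:
    for sd in (1.0, 2.0, 4.0):
        print(f"sd={sd}: n={n_per_group(effect=1.0, sd=sd)} per group")
    ```

    Doubling the subject-to-subject variability roughly quadruples the required sample size (n scales with the square of the SD), which is one reason fields with noisy, hard-to-measure subjects need much larger studies to reach the same statistical power.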
     
  23. Aaron DC
    freedom from bias

    That'd be a fine thing.
     
    Lemex likes this.
  24. Sifunkle
    Or are you just saying that because bias has done you wrong previously? :p
     
  25. rainy_summerday

    You hit it spot-on, in my experience.

    Large studies are often carried out in order to increase the validity of the result (there is never a guarantee...).

    Descriptive responses are actually favoured in many psychological experiments, because qualitative evaluation is often used for "fine-tuning", especially in educational psychology, or for very specific hypotheses where a small pool of interviewees is not a disadvantage in any way.


    @Aaron, yes, I've heard something similar. A friend of mine is struggling a lot, because her branch of theoretical physics is not being funded either. It's a pity. I wonder how great a world it would be if the major countries had different priorities in their spending. Could be disastrous, too, though. There are things that should not be tested/built/attempted.
     
    Last edited: Aug 29, 2015