To be clear, this doesn't seem like it invalidates anything in the original experiment.
The "rule-breaking" isn't referring to anything the researchers were doing.
It's referring to what the participants were doing. It points out that the compliant subjects who delivered the shocks weren't always following the procedure they were given perfectly. Which is, of course, expected, since people in general don't follow instructions 100% perfectly all the time, and especially not the first time they do something.
> Kaposi and Sumeghy interpret these patterns as a complete breakdown of the supposedly legitimate scientific environment. The subjects were not committing violence for the sake of an orderly memory study. With the scientific elements either forgotten or rushed, the laboratory changed into a setting for unauthorized and senseless violence.
This feels like a huge stretch. Forgetting a step at one point or reading something out loud too early isn't a "complete breakdown of the supposedly legitimate scientific environment" -- a "scientific environment" that is completely fictional to begin with.
Well, if you're supposed to administer shocks to teach or test someone's memory, asking the question while they're screaming isn't just a protocol issue; it defeats the purpose of the shocks. Saying that participants administered shocks because they trusted the legitimacy of what they thought they were doing doesn't hold up under these circumstances.
No, because you'd have to show that the participants thought there was a breakdown of the procedure and purpose, and that they continued despite that.
If they think the procedure is to read the next question when the previous one has been completed, and they do, even if the other person is screaming, they think they're "following rules". They're not the ones who came up with the procedure.
Which is the whole point: the participants were trying to follow rules, even if they made mistakes in following those rules. The idea that there was a total "breakdown" of the rules doesn't seem supported at all.
It wasn't a properly controlled experiment to begin with, nor was it repeated. General conclusions should not be drawn from a single, flawed study. But it makes for good headlines and talking points.
This study is so flawed in so many ways that it doesn't prove or disprove anything in any way. The most obvious flaw is the assumption that the test subjects did not realise it was fake. It was not controlled in any way, and many of the subjects (presumably Yale students, so hardly complete dumb-dumbs) probably thought it was just a lark.
> By staying silent and letting the memory study fall apart, the experimenter allowed an atmosphere of illegitimate violence to flourish.
Many people are cruel. Not all people, maybe; not most people, also maybe; but some people enjoy hurting others. We see this everywhere. Isn't it possible that this kind of profile jumped on the occasion to inflict pain on people with no fear of repercussions?
In other words, isn't this study just a sorting filter to triage / order students from most cruel to least cruel?
No. I highly encourage people to read his book. What you are describing is a classic example of Fundamental Attribution Error - the assumption that people’s actions are primarily the result of some innate trait, versus that of circumstance.
His study plainly shows that most people, in the right circumstances, will act in unimaginably cruel ways.
People that have been treated well are more likely to treat other people well.
If we remove this cycle of decency, what is the natural rate of humans that will hurt others?
The premise is flawed, humans learn from their environment and there's really no way to put a human in a coffin until they're 20 and see what they do then.
> The premise is flawed, humans learn from their environment and there's really no way to put a human in a coffin until they're 20 and see what they do then.
Yeah, but you can also find that rate if you remove the trigger (abuse) from the environment (society) and see how the rate changes.
You don't have to lock someone in a coffin, or something ridiculous like that (and that would be counterproductive anyway). You create a society, or at least a sub-society, where there's no abuse, and see how much abuse is invented by the people raised in that environment.
If a child is sexually abused, perhaps society would benefit from segregating the victims of abuse to prevent the cycle of abuse from continuing?
Let’s put it another way: if a Catholic priest touches a choirboy, it’s not a good idea to let the choirboy become a priest and victimize the next generation of choirboys.
> as victims become perpetrators, it may be best to segregate victims to prevent future abuse and victimization
Wonderful idea. Let's not forget to segregate the poors, since they commit violent crimes at higher rates too. We can build a perfect utopia if only we just get rid of all the undesirables!
Nobody can answer that. Abuse can be low-intensity and spread across a long period of time, or a single intense one-off event, and the resulting damage can be similar, spread across whole lifetimes up to the point of the experiment.
Extremely individual reactions, what makes one tougher breaks another completely and permanently, and everything in between.
I'd say everybody has experienced some sort and level of abuse. Take typical school bullies, for example (who were usually also bullied somehow themselves, hence the behavior).
Back in the course of human evolution there must at some point have been mammals who were not yet riding on the dysfunctional cycle of violence. That means the natural rate must be non-zero, at least, or else the cycle would have no starting momentum.
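To make that "starting momentum" argument concrete, here's a toy iteration (my own illustration with made-up numbers, nothing from the thread): if hurting others were purely learned from being hurt, with a zero spontaneous "natural" rate, the rate could never leave zero.

```python
# Toy transmission model (illustrative only): the fraction of the next
# generation that hurts others, given the current fraction, a transmission
# strength, and a spontaneous "natural" rate.
def next_rate(rate, transmission=0.7, natural=0.0):
    return min(1.0, transmission * rate + natural)

rate_no_base = 0.0          # before any violence exists
for _ in range(100):
    rate_no_base = next_rate(rate_no_base)

rate_with_base = 0.0
for _ in range(100):
    rate_with_base = next_rate(rate_with_base, natural=0.01)

# With natural=0 the rate stays exactly 0.0 forever; with a tiny non-zero
# natural rate it settles near natural / (1 - transmission) = 0.01 / 0.3.
print(rate_no_base, rate_with_base)
```

So for the cycle to exist at all, some non-zero baseline has to feed it, which is the commenter's point.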
Interesting. If we can assume the experimenter's failure to enforce the rules was mere clumsiness or incompetence, rather than an indicator of underlying intentional manipulation of the experimental conditions à la Stanford prison experiment, this can be interpreted in many different ways.
The (eventually) disobedient subjects were better at respecting the experimental process they were given than the "obedient" ones who went all the way to the maximum voltage. Why was that?
Could it be a sign that the disobedient subjects were on average more concentrated on the task at hand (smarter? less stressed? better educated? more conscientious?) than the ultimately obedient ones, and therefore were more likely to realise they were "hurting" the alleged learner and stop?
Or could it be that the obedient subjects were more likely to realise there was something fishy going on, suspecting the "learner" wasn't really being shocked, and thus were paying less attention to the learning rules?
Or was it, as the article suggests, that the obedient ones may have shut down emotionally under pressure to follow through, and their mistakes are the result of that?
Or were the obedient ones more likely to be actual sadists, who were enjoying the shocks so much that they didn't even care if the "learner" didn't hear their question, giving them a greater chance of shocking them again?
Unfortunately I think the Milgram experiment has become so entrenched in popular culture that there's absolutely no way it can be properly repeated to explore these questions.
A lot of the problem with these "disproven" things is overly broad scope, or being abused in the popular media beyond comprehension.
The delayed gratification thing in particular is correlation vs. causation. It was really more about trust. Forcing kids to delay gratification is meaningless or counterproductive.
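The trust/confounder point can be sketched with a tiny simulation (my own made-up model, not data from the actual marshmallow studies): a hidden common cause produces a solid correlation between waiting and later outcomes even though waiting causes nothing.

```python
# Illustrative confounding sketch: "trust" (a stable environment) drives
# both willingness to wait and later-life outcomes; waiting itself has
# zero causal effect on outcomes, yet the two correlate.
import random

random.seed(1)

n = 50_000
waited, outcome = [], []
for _ in range(n):
    trust = random.gauss(0, 1)                  # hidden common cause
    waited.append(trust + random.gauss(0, 1))   # "delay of gratification" score
    outcome.append(trust + random.gauss(0, 1))  # later-life outcome

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = corr(waited, outcome)
# Correlation comes out around 0.5, yet forcing kids to wait would move
# nothing, because waiting never feeds into the outcome here.
print(r)
```

Which is exactly why "correlation vs. causation" matters here: an intervention on the marker does nothing if the marker isn't the cause.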
Agree. But according to Gemini [for what it's worth], the final 1990 marshmallow study [since the first versions were cautious] did indeed jump to the conclusion that there was causation leading to a better later life. The media might have amplified it, but the wrong (or misleading) conclusion was already present in the _scientific_ paper.
The first point, as I can see in my own life, is valid. Not properly rich by any means, but I vastly surpassed any expectations and most of my peers from earlier life (which is rather easy when coming from poor eastern Europe, but somehow most folks from back home didn't; too deep in their little comfort zones or fears of risks that were mostly made up).
It can be reframed as roughly discipline too: a willingness to suffer a bit for later rewards. I can see this as a massive success multiplier in many real-world situations.
The Milgram experiment also couldn't be repeated today as it was completely unethical. It caused huge psychological distress to participants to the point that some participants had seizures.
My guess is that it is the pressure to conform working in multiple ways.
The reading of questions while the subject was screaming looks like a performative act of conforming to the pattern, where the failure of the pattern is blamed on the answerer failing to conform. That makes the shocks a punishment for failing to conform. The questioner keeps a facade of doing the right thing by going through the motions, even though they are breaking the rules by doing so, because if the other party were compliant that rule wouldn't have been broken. That the shocks were painful would feel appropriate to those with a strong sense that nonconformity should be punished. It is less that they were following the rules and more that they were assuming the intent of the rules and permitting abuse because the intent was not their decision. That might make them less willing participants in the abuse and more 'not my problem' active participants.
Without study of the internal motivations, the conclusions of the study are pure conjectures.
You are trapped in an experiment, you have the impression that things went too far, and you think you can't escape? You rush it. You hear horrible noises? You just pretend you don't hear them. These are all classic mental patterns. There are a million ways to explain them.
That's an interesting perspective, and it does expand how we can interpret the Milgram experiment.
That said, the study has been replicated many times since the original, with researchers adjusting different parameters like participant screening, changing the gender balance, or varying the roles (teacher/student, researcher/technician...). Across these variations, the overall result stays quite consistent: under certain conditions, ordinary people can be led to do harmful things.
Other experiments have also looked at which factors make this more likely, and for example, diffusing responsibility seems to be one of the most effective ones.
> Across these variations, the overall result stays quite consistent: under certain conditions, ordinary people can be led to do harmful things.
The pop culture version of what happened in those experiments is “regular people will administer potentially lethal shocks when told to”, and that claim has been refuted experimentally many times over.
Contrary to most reports, the original experimenters never told participants that the shocks were supposedly lethal or even dangerous. When participants were actually told that there was a health risk, and that they should ignore it, the vast majority of participants refused to administer the shocks in a later recreation.[1]
In other words, the Milgram experiment, as commonly understood, is somewhere between sensationalism and an outright lie.
[1] https://www.mdpi.com/2076-0760/3/2/194
I wonder what percentage of "obedient" teachers saw through the facade, realized that the learner wasn't a very good actor, and were just having a good time playing along with what must've seemed like some psychology professor's weird pain kink.
I have always been pretty critical about "psychology" as a field, but always kept famous successful experiments (like Milgram and the Stanford prison experiment) as examples that "sometimes it's possible to actually get interesting results".
Turns out those are not valid examples either. So I am genuinely wondering: what remains of the field of psychology, except for a group of people who find it interesting to think about how other people think/behave? Are there examples of actual, useful and valid conclusions coming from that field?
I'd think the conclusion you should draw is not that "even the famous experiments were not valid, so nothing in psychology is" but rather "the validity of an experiment does not correlate with how famous it is".
A direct conclusion. The insight I'll draw from that is that academia gives voice to the results the current zeitgeist finds interesting and believable without properly verifying the evidence.
> Are there examples of actual, useful and valid conclusions coming from that field?
In order for someone to answer this, I think you need to come up with some sort of definition what "actual", "useful" and "valid" actually means here in this context.
Lots of stuff from psychology has been successfully applied to treat people in therapy for various issues, but is that "valid" enough for you? Something tells me you already know some people are being helped in therapy one way or another, yet it seems to me those might not be "useful" enough, since I don't clearly understand what would be "useful" to you if not those examples.
Psychology "knows" that people don't enter treatment until things are really bad, and then they get better - no matter what treatment is provided. Finding treatment that is better than others is the important part and they also know they are not very good at that.
> and then they get better - no matter what treatment is provided
I don't know what experience of therapy you've had in the past, but this is typically not how it works. People get better when a treatment is applied that is suitable to them as a person and to the context. I'm not sure where you'd get the whole "people get better no matter what treatment is applied"; it hasn't been true in my experience.
I'm only reporting what I heard in my intro to psychology class years ago... Still, this is more regression to the mean applying. There are for sure treatments that are better than doing nothing; there are also treatments worse than doing nothing. But in general people tend to get better after a time. (They often get worse again in a few months, but this was not covered in class.)
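For what it's worth, the regression-to-the-mean effect is easy to demonstrate with a toy simulation (my own invented numbers, not real clinical data): select people at their measured worst, remeasure later with no treatment at all, and the group average improves.

```python
# Regression-to-the-mean sketch: measured "distress" = stable trait +
# day-to-day noise. People enter therapy only when measured distress
# crosses a threshold, i.e. at an unusually bad (noisy) moment.
import random

random.seed(0)

def simulate(n=100_000, threshold=2.0):
    entered, later = [], []
    for _ in range(n):
        trait = random.gauss(0, 1)                 # stable component
        at_intake = trait + random.gauss(0, 1)     # noisy intake measurement
        if at_intake > threshold:                  # only the "really bad" enter
            entered.append(at_intake)
            later.append(trait + random.gauss(0, 1))  # remeasure, NO treatment
    return sum(entered) / len(entered), sum(later) / len(later)

intake_mean, later_mean = simulate()
# With no treatment whatsoever, the later mean is noticeably lower than
# the intake mean, purely because intake selected for bad luck in the noise.
print(intake_mean, later_mean)
```

That selection effect is why untreated "improvement" after intake doesn't, by itself, say anything about a treatment.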
Intro to psychology prepares you for understanding the practice of psychology in the same way grade school arithmetic prepares you for differential geometry. It's not enough to even understand the shape of things.
The results absolutely are interesting - in fact they’re far stronger for the willingness of many to inflict violence than the original description suggested.
> While every obedient participant reliably pressed the shock lever, they regularly neglected or ruined the other steps required to justify the shock.
Procedural violations here include things like asking the question while the person in the other room was still screaming.
Those two experiments are over 50 years old. It's a bit like dismissing physics because Hubble got his constant wrong. Psychology has a lot of issues, but it's also an enormous field. If your frame of reference is half a century out of date, you should probably start with some encyclopedia articles.
The Hawthorne effect is real. And I don’t think we will ever get a 100% solid grip on what’s happening in others’ minds. Well, until we can actually read, understand, and interpret brain activity at the cellular level.
The ironic part is the recent fabrication controversy with Ariely. He’s recently had to retract fraudulent papers (one of them, most ironically, on the topic of honesty) because of falsified data. It makes one question the validity of all of his work.
His relationship with Jeffrey Epstein isn’t a good look either.
This one is actually interesting: The statistical difference highlights that the people who eventually quit were actually better at following the scientific protocol than those who went to the end.
And also this: The most frequent violation in obedient sessions (those who shocked till the end) involved reading the memory test questions over the simulated screams of the learner. Doing this effectively guaranteed that the learner would fail the test and receive another shock.
Basically, being willing to shock other people without stopping was more about violence itself being permitted than about being an obedient person. Rule followers followed the protocol until they concluded "nope, this is too much" and stopped mistreating the victim.
If we remove this cycle of abuse, what is the natural rate of humans that will hurt others?
An uncomfortable idea: as victims become perpetrators, it may be best to segregate victims to prevent future abuse and victimization.
Gross but perhaps a benefit to society
* kids grow to be rich because they accept delayed gratification
* alpha males are the leaders of the pack and all other males are useless
* people accept violence if there is a higher authority which justifies it with a reason
How many people suffered or delivered suffering because of their beliefs in the above?
https://en.wikipedia.org/wiki/Goliath%27s_Curse
See also the replication crisis.
And based on everyone I've met, and on Dan Ariely's own actions (1), I've concluded this one is true.
We all cheat a little from time to time.
E.g., for me, driving a few km/h above the speed limit is "cheating a little".
1 : https://www.businessinsider.com/dan-ariely-duke-fraud-invest...