One of my internships at a psych practice involved collecting research that supported the notion that monitoring heart rate, respiratory rate, etc. can help people with panic disorder become more mindful of how their symptoms affect their bodies.
I distinctly remember presenting three or four legitimate case studies that my site supervisor threw out, some for confirmation bias and some because their data didn't account for context in the experimental group. It was harsh, but it was a lesson in how the scientific method is used to acquire valid data, and in how to recognize improper data collection methodology. It was humbling, and frankly embarrassing to my 22-year-old self who really thought I was getting it lol, but it totally reshaped my understanding of how data is collected.

Context matters: if statistics are going to be used to support an argument, error needs to be measured and accounted for, and so does the context. Data in itself does not support an argument.
That said, cherry-picking numbers to support an argument makes sense on a superficial level, and it's an easy error to make. I sure as hell did.