http://www.vox.com/cards/savvy-science-reader/scientific-evidence-intro-card
#6 is probably the most important. Most peer-reviewed journals don't make it mandatory to report effect sizes, which makes it difficult to judge the strength of the relationship between two variables. A study can show that a relationship exists, but statistical significance alone says nothing about how strong that relationship is. More and more peer-reviewed journals are now requiring researchers to report effect sizes.
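To make the significance-vs-strength distinction concrete, here's a minimal sketch of one common effect size, Cohen's d (the standardized difference between two group means). The function name and the sample data are my own illustration, not anything from the linked article:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Sample variances (n - 1 in the denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Made-up measurements for two groups
a = [5.1, 5.3, 4.9, 5.2, 5.0, 5.4]
b = [4.8, 5.0, 4.7, 5.1, 4.9, 4.6]
print(round(cohens_d(a, b), 2))  # prints 1.6
```

A p-value would only tell you whether the difference between the groups is likely real; d tells you how big it is, which is exactly the information a significance test alone leaves out.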
Not to sound like an asshole, but I learned all eight of those points in 11th-grade statistics. The issue is that statisticians spend DECADES studying those eight points, which means the average layperson can't just come in and start critiquing research based on them lol. It's way more complicated than that. They may be a good starting point, but in the long run you're going to need a much more in-depth understanding of statistics (ANOVAs, regression analyses, MANOVAs, chi-squares, t-tests, etc.) to truly be able to critique research.
Not to mention that virtually ALL journals require research articles to devote space to discussing the limitations of the study. In the conclusion section of a journal article, you will almost always find the researchers discussing the "LIMITATIONS" of their work and the need for "FUTURE RESEARCH" on the topic.