Why the Research behind Most Edtech Products is Pure Rubbish
Smart consumers of edtech understand that they need to consult the research before committing to any product. But the hard truth is that the research behind many edtech products may not deliver what it promises. Here are some factors that keep edtech product research from meeting the highest standards:
First, edtech products have an incredibly rapid life cycle. In a traditional model, research takes a very long time to plan, conduct, evaluate, and disseminate. Unfortunately, by the time that process is complete, a cutting-edge edtech product is likely to be out of date. This means that the research behind a product still in use was probably conducted quickly, which in turn means there are unlikely to be any long-term studies of its outcomes. One bright spot is that the mismatch between research cycles and edtech product life cycles has led to the creation of tools that enable quick evaluation procedures, such as the EdTech RCE Coach.
Second, one of the main goals of edtech is personalized learning, and edtech tools seem like the ideal vehicle for delivering it. However, research data is usually presented in the aggregate, which makes it extremely difficult to determine how well any particular tool succeeded at personalizing learning. For example, suppose an edtech tool improves average standardized test scores by 10%. That aggregate figure could still mean that a substantial portion of the test subjects saw no improvement, or even regressed, with their poor performance masked by large gains among a small number of students.
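To make that masking effect concrete, here is a minimal sketch in Python with made-up scores (purely illustrative, not data from any real study): the class average rises by exactly 10%, yet most students stagnate or regress.

```python
# Hypothetical illustration: how a 10% average gain can mask
# widespread stagnation or regression among individual students.
pre  = [50, 50, 50, 50, 50, 50, 50, 50, 50, 50]   # pre-test scores
post = [50, 49, 50, 48, 50, 50, 49, 50, 77, 77]   # post-test scores

avg_pre = sum(pre) / len(pre)     # 50.0
avg_post = sum(post) / len(post)  # 55.0
print(f"Average gain: {(avg_post - avg_pre) / avg_pre:.0%}")  # -> 10%

improved = sum(1 for p, q in zip(pre, post) if q > p)
regressed = sum(1 for p, q in zip(pre, post) if q < p)
print(f"{improved}/10 students improved, {regressed}/10 regressed")
# -> 2/10 students improved, 3/10 regressed (5 unchanged)
```

This is why per-student, or at least per-subgroup, results matter far more than a single headline average.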
Third, the realities of cost and time constraints mean that a lot of research is limited to extremely small studies. But a large sample size is essential to determining whether a particular tool is effective for a large pool of students with diverse characteristics. Without an adequately large sample, a study lacks the statistical power to reliably detect real effects, and its results are not helpful.
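A back-of-the-envelope power calculation illustrates the problem. The sketch below is an assumption-laden approximation (a two-group comparison, the standard normal approximation to the power formula, scipy used only for the z-quantiles), not a substitute for a proper power analysis:

```python
import math
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate students needed per group for a two-group comparison,
    using n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2,
    where d is the standardized effect size (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_power = norm.ppf(power)          # quantile for the desired power
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# A "small" effect (d = 0.2), common in education research, needs
# roughly 393 students per group; many edtech pilots enroll far fewer.
print(n_per_group(0.2))  # -> 393
print(n_per_group(0.5))  # -> 63, even for a medium-sized effect
```

A pilot with 20 or 30 students per condition simply cannot distinguish a modest real effect from noise.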
Fourth, little of the research meets the high standard of publication in a peer-reviewed journal. While a tool's website might display what appears to be an impressive array of research results, complete with visualized data, if those results are not of a quality and rigor to merit publication, there is no reason to give them great weight in decision-making.
Fifth, some tools are simply hard to test. If, for example, a math program gives students multiple options, then studies of the tool may only apply to students who experience it in precisely the same way, which few are likely to do.