This study evaluates two pedagogical content analysis schemes for measuring critical discourse in Massive Open Online Course (MOOC) forums, contrasting manual ratings with automatically derived linguistic and interaction indicators. Both approaches prove reliable and strongly correlated with one another, suggesting that machine learning techniques could be integrated to improve feedback for instructors and learners.