This paper explores issues affecting the quality of annotations in crowdsourcing tasks used in natural language processing. By combining a literature review with an empirical analysis of data from workers' forums, it identifies key problems across the task design, operation, and evaluation phases. While the literature highlights a range of issues, including unfair rejections and late payments, the forum data analysis reveals that poor task design, such as malfunctioning task environments and privacy violations, significantly affects workers' experiences. The findings point to areas for future research aimed at improving crowdsourcing processes.