This position paper addresses the challenges of hate speech, terrorist propaganda, and disinformation in the digital age, examining artificial intelligence (AI) as a potential tool for content identification and moderation. It emphasizes that AI is not a one-size-fits-all solution but instead encompasses a range of technologies that shape how online content is ranked and recommended, raising concerns about freedom of expression and access to information. The paper is structured in two main parts: the first examines the use of automated systems for content moderation, while the second explores how recommendation algorithms can amplify harmful content. It concludes by calling for new safeguards for freedom of expression in the context of automated speech governance.