Professional content moderators are responsible for limiting the negative effects of online discussions on news platforms and social media. However, little is known about how they adapt to platform and company moderation strategies while reviewing and handling uncivil comments. Using qualitative interviews (N = 18), this study examines which types of comments professional moderators classify as actionable, which (automated) strategies they use to moderate them, and how these perceptions and strategies differ across organizations, platforms, and individuals. Our results show that moderators divide content requiring intervention into clearly problematic comments and “gray area” comments. They delete clear cases, often automatically, but apply interactive or motivational moderation techniques to “gray areas.” While moderators desire more advanced moderation technologies, they consider them incapable of handling context-dependent comments. These findings highlight the need for nuanced regulation, underscore the crucial role of moderators in shaping public discourse, and offer practical implications for (semi-)automated content moderation strategies.