Some years ago, we started Flowics with the mission of building a social marketing platform to help brands and media enhance the way they connect with their target audiences on social media. Since then, we've built different solutions into our platform to help our clients inspire conversations, curate and moderate user-generated content, and amplify its reach across other digital environments.
One of the solutions we've built is our Auto Response Agents for Twitter, a very popular tool launched exactly two years ago. Our technology enables clients to run Auto Response campaigns on Twitter supporting a number of different mechanics: a) Tweet to receive a personalized image (e.g. a personalized jersey of your favourite sports team with your @handle on the back), b) Tweet to receive a scheduled reminder @reply when an event is about to take place, c) Flock-to-Unlock (receive a @reply once a participation threshold is reached), etc.
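To make the last mechanic concrete, here is a minimal sketch of how a Flock-to-Unlock rule could be expressed. The names, structure, and reply text are illustrative assumptions for the sake of the example, not our actual implementation.

```python
# Illustrative sketch of a Flock-to-Unlock mechanic (hypothetical names,
# not the actual Flowics implementation).
from dataclasses import dataclass, field


@dataclass
class FlockToUnlock:
    hashtag: str               # campaign hashtag to track
    threshold: int             # distinct participants needed to unlock
    participants: set = field(default_factory=set)
    unlocked: bool = False

    def handle_tweet(self, author_handle: str, text: str) -> str | None:
        """Return a reply to send once the participation goal is reached."""
        if self.hashtag.lower() not in text.lower():
            return None
        self.participants.add(author_handle.lower())
        if not self.unlocked and len(self.participants) >= self.threshold:
            self.unlocked = True
            return f"@{author_handle} The goal was reached! Here is the unlocked content."
        return None
```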
Our platform is completely self-serve, but on certain occasions we get requests from clients or partners who ask for our help setting up our system for them. We have an agency services team that helps clients in these cases. Since launch, this technology has been used by brands and media to support nearly a hundred campaigns on Twitter, helping them engage at scale.
Today, for the first time, we've failed one of our clients: the Montreal Canadiens. We are truly sorry about it and we want to extend our apology to them, their fans, and anybody who might have been offended by this. The @CanadiensMTL team reached 1M followers on Twitter yesterday, and to celebrate this special occasion they trusted us to set up an Auto Response Twitter campaign to deliver personalized jerseys to their fans. In this case, our agency team was asked to be in charge of the setup and, regrettably, we did it wrong. Due to human error in the configuration, we failed to activate a filter in our product that rejects offensive or abusive content.
To provide a brand-safe environment for our customers, our product has a curation engine that automatically moderates content based on a number of signals and attributes of the Tweet and the author's profile. With this, we can guarantee our clients that they don't engage with users who post profanity, racist comments, or other types of abuse. We take this very seriously. We couldn't detect in time that our Profanity Filter was turned off, and our client's campaign went live without this filter being active.
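For readers curious about where a single configuration flag fits in, the following sketch shows how a moderation gate of this kind can sit in front of auto-responses. It is a simplified, hypothetical illustration with made-up names and a placeholder blocklist, not our actual curation engine.

```python
# Simplified, hypothetical moderation gate (placeholder names and terms,
# not the actual Flowics curation engine).
import re

BLOCKLIST = {"offensiveterm1", "offensiveterm2"}  # placeholder entries only


def contains_blocked_term(text: str) -> bool:
    """Check the text, including the author's handle, against the blocklist."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return any(token in BLOCKLIST for token in tokens)


def should_auto_respond(author_handle: str, tweet_text: str, config: dict) -> bool:
    """Allow an auto-response only if the tweet and author pass every active filter."""
    if config.get("profanity_filter_enabled", True):
        if contains_blocked_term(author_handle) or contains_blocked_term(tweet_text):
            return False
    # ...other signal and attribute checks would follow here
    return True
```

In this simplified model, a campaign configured with the profanity filter disabled skips the check entirely, which is exactly the kind of gap a single misconfiguration can open.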
As a consequence, some users started to game the system using abusive and racist Twitter handles, which were then overlaid on custom digital jerseys of the Montreal Canadiens and delivered to users as personalized auto-responses.
We stopped these responses as soon as our client notified us of this unwanted behaviour, but some racist and offensive responses had already been sent out.
We sincerely apologize to the Montreal Canadiens for this failed execution. This was a human error: a configuration mistake that ruined an important celebration for our client.
We are already taking measures internally and revising our processes to make sure this doesn't happen again.