
FTC bans fake online reviews and paid social media bots

DonkeyPunchJr

World’s Biggest Weeb

  • The Federal Trade Commission voted unanimously to ban marketers from using fake reviews, such as those generated with AI technology, and other misleading advertising practices.
  • The ban also forbids marketers from exaggerating their own influence by, for example, paying for bots to inflate their follower count.
  • With the concurrent rise of e-commerce, influencer marketing and generative AI, more advertisers are turning to automated chatbots such as ChatGPT to quickly generate user reviews for products sold online.

Wonder if this will mean the end of fake astroturfed hype like Star Wars Outlaws is using:

 

Topher

Gold Member
Ass of Can Whooping: you are going to pay for what you did to Starfield, gotdangit

Super Troopers Police GIF
 
For this to actually be enforced, companies would have to admit to intentionally using bots. And yeah, that just isn't going to happen. I doubt this will have any impact at all on astroturfing and the like.
 

ReBurn

Gold Member
Yeah, good question. Not sure how it will be enforced, but I would bet this will at least make it much harder to purchase the services of an advertising firm that specializes in this kind of astroturfing marketing.
I'm thinking it won't actually make it harder, because people can just run bots from somewhere outside the FTC's jurisdiction.
 

ProtoByte

Weeb Underling
I was pretty critical of the FTC's arguments against MS because they were largely inaccurate and nonsensical, and they were proven wrong almost immediately. But they've been getting a few wins recently that I totally agree with.
You disagreed with the FTC out of fanboyism.

I disagreed with the FTC because antitrust is only "necessary" when the government is propping up winners, thereby violating the free market. In other words, the government attempts (and fails) to fix problems the government itself creates. I have agreed with precisely zero of their recent cases as a result.

Here, fake reviews should already be covered under any reasonable definition of fraud or false advertising.

As for paying for bots: again, I suppose exaggerating or falsifying your influence should count under the definition of fraud, but for the purposes of driving traffic I don't see how it's effectively different from paying for ad space or algorithmic prioritization, other than taking up bandwidth on company servers. But I don't think most social media companies really care about bots, because it benefits them with advertisers if they can say "we have x (+ y bots) monthly active users."
 

EDMIX

Writes a lot, says very little
The same as how the guy from OP figured it out. You can easily tell.

Figured what out? I don't see anything in that post that actually proves anything directly tying any company to this.

So sorry man, but this whole "you can easily tell" doesn't work in a court of law.

We'd need some way to actually, LEGALLY prove the company did this, was behind this, paid for this, etc.

So the question remains, how would they enforce this?

So for this whole "if you find a company doing shady stuff" scenario, let's get to proving that first and go from there.

That's like saying if EA wants Activision/MS to take a hit, they should fucking make a series of AI bots praising Call of Duty, and their proof would be "you can easily tell" lol
 

EDMIX

Writes a lot, says very little
Then you need some glasses.
No bud, then YOU NEED some proof.

I'm sorry, but this whole "you can easily tell" is not fucking proof of anything. It's as if you want assumptions about how something looks to be treated as fucking fact in a court of law, instead of actually having real evidence of something.

So I don't really know how this would be enforced. It'd be like using "your honor, you need some glasses" and/or "wooooow, you can easily tell" as your opening arguments lol

IP addresses, communications with marketing firms, I mean something, man, but this idea of just assuming isn't how the law works. To enforce this, they need actual evidence that the company is the one doing it, not innuendo or assumption, because by your obviously flawed, silly logic, any company could use bots to praise their competitors' games and then report them to the FTC.
 

IAmRei

Member
I hope this also drastically affects Google Play. The amount of bots giving 5 stars there is insane.
And that is the best place for a bot harvest; even if there were a bot hunt paying 1 cent per bot, you would still end up a rich guy in the end.
 

Kabelly

Member
At this point I don't trust anything I read on the internet anymore. Even what I see too. We've already lost the war against AI.
 

Griffon

Member
It is a great step in the right direction.

If you're asking how it will be enforced, I say don't worry. Once people know this is against regulations, proof will be easy to come by.
People talk.
 

Shubh_C63

Member
It's an epidemic, and they use countries like India to inflate their numbers.

They randomly add you to WhatsApp/Telegram groups and ask you to drop 5-star reviews for their hotel/spa/restaurant/beauty product, with the review required to highlight a couple of specific aspects, all for 100 Rs (that's like one dollar... for 3 fake reviews). And they do this every day with at least 200 people!

Edit -
The UK government is tabling its long-awaited digital markets, competition and consumer bill in parliament on Tuesday, which will make it illegal to pay someone to write a fake review; to offer to submit, commission or facilitate one; and to host a review without taking steps to check it is real. It follows a similar legal ban passed in Ireland last year.
The travel site says it identified and removed 1.3m fake reviews last year, with 72% caught before being posted. The removed posts included 24,521 associated with paid-review companies. Almost half of those originated from six countries: India, Russia, the US, Turkey, Italy, and Vietnam.
 

Fess

Member
Good idea, but difficult to implement. Haven’t they seen the workplaces in China where workers sit in front of a wall of phones, posting and clicking likes from multiple accounts? How will they prove that those aren’t real people and aren’t hired to do what they do?
 

bigdad2007

Member
For this to actually be enforced, companies would have to admit to intentionally using bots. And yeah, that just isn't going to happen. I doubt this will have any impact at all on astroturfing and the like.
If they make the fines big enough and offer whistleblower bounties, it will actually start to have an effect. These people aren’t meeting in secret, shadowy back rooms; I’m sure most companies offering such services do so right out in the open, for the most part.

This will drive them underground. Also, big companies do take the FTC seriously. They have the power; I have used them to get bogus fees waived by Comcast/AT&T. It takes a while, but they do actually have sway.
 