When Meta CEO Mark Zuckerberg was called to testify before Congress in 2018, Senator Orrin Hatch asked him how Facebook makes money. Zuckerberg's reply has since become something of a meme: "Senator, we run ads."
Between July 2018 and April 2022, Meta generated at least $30.3 million in ad revenue from networks that it removed from its own platforms for engaging in coordinated inauthentic behavior (CIB), according to data collected by WIRED. Margarita Franklin, Meta's head of security communications, confirmed to WIRED that the company does not refund ad money when a network is taken down. Franklin clarified that some of the money came from ads that did not themselves violate company policies but were published by the same public relations or marketing firms that were later banned for their involvement in CIB operations.
According to a report in the Wall Street Journal, by the end of 2021 Meta was estimated to hold 17 percent of the global advertising market, having made $114 billion from advertising. At least some of that money came from ads bought by networks that violated Meta's policies and that the company itself flagged and removed.
"Globally, the advertising industry is worth an estimated $400 billion to $700 billion," says Claire Atkin, cofounder of the independent watchdog Check My Ads Institute. "It's a big range, but nobody knows how big the industry is. Nobody knows what's going on inside."
But Atkin says part of what makes information, including ads, feel legitimate on social media is the context in which it appears. "Facebook, Instagram, WhatsApp, this whole network of our internet experience is where we connect with our closest friends and family. It's a place on the internet where we share our deepest emotions about what's going on in our lives," says Atkin. "This is our trusted place to connect."
For nearly four years, Meta has issued periodic reports identifying networks of fake CIB accounts and pages that seek to deceive users and, in many cases, promote propaganda or misinformation in a way that looks organic and sways public opinion. These networks can be operated by governments, independent groups, or public relations and marketing firms.
Last year, the company also began tackling what it calls "coordinated social harm," where networks use real accounts as part of their information operations. Nathaniel Gleicher, Meta's head of security policy, announced the change in a blog post, noting that "threat actors are deliberately blurring the lines between authentic and inauthentic activities, making enforcement harder across our industry."
This change, however, illustrates how narrow the company's criteria for CIB are, meaning that Meta may not have documented networks that used other tactics at all. Information operations may sometimes use real accounts, or be carried out on behalf of a political action committee or LLC, making it difficult to classify their behavior as "inauthentic."
"One tactic that has been used more frequently since at least 2016 is not bots but real people who go out and post something," says Sarah Kay Wylie, a researcher at Columbia University's Tow Center for Digital Journalism. "Facebook's CIB reports sort of capture this, but it's really hard to see."
Russia accounted for the most ads run by networks that Meta identified as CIB and subsequently removed. The United States, Ukraine, and Mexico were the most frequently targeted countries, although almost all of the campaigns targeting Mexico were linked to local actors. (Meta's earnings reports don't break down how much the company earns by country, only by region.)
Over $22 million of the $30.3 million was spent by just seven networks, the largest of which was a $9.5 million global campaign linked to the right-wing, anti-China media group behind The Epoch Times.
Of the 134 paid ad campaigns Meta identified and removed, 56 percent targeted domestic audiences. Only 31 percent targeted exclusively foreign audiences, that is, users outside the country where the network originated. (The remaining 12 percent targeted a mix of domestic and international audiences.)
Many of the largest networks Meta removed were run by public relations or marketing firms, such as Archimedes Group in Israel and Pragmatico in Ukraine. When this happens, Meta removes and bans every account and page associated with that firm, whether or not they were part of a specific CIB campaign, to prevent such companies from selling "disinformation for hire" services.
CIB campaigns and disinformation are not limited to Facebook and Instagram. Twitter, which refers to such activity as "information operations," has identified and removed thousands of accounts on its own platform. Although researchers have identified disinformation campaigns on TikTok, the company's Community Guidelines enforcement reports do not indicate whether the platform takes action against coordinated inauthentic content, and if so, how.
Wylie says the Meta reports obscure how little researchers and the public still know about what's going on inside the company and on its platforms. In a January report, Meta said that due to growing threats against its teams, it "will prioritize enforcement and the security of our teams over publishing our findings," which could hurt transparency.
"Is this the tip of the iceberg? Unfortunately, I think so," Wylie says.
"Over the past five years, we have shared information about more than 150 covert influence operations that we removed for violating our coordinated inauthentic behavior (CIB) policy. Transparency is an important tool to counter this behavior, and we will continue to take action and report publicly," says Meta's Gleicher.
“It’s strategic transparency,” Wylie says. “They can come out and say they’re helping researchers and fighting misinformation on their platforms, but they don’t really show the whole picture.”
Even when a campaign is shut down, Atkin says it can still be useful to its operators. "They are still able to get an incredible sense of the audience," she says. "They'll see who clicked on [their ads], who the suckers are, and then they can use this list to retarget them."
Updated 6/23/2022 4:45 pm ET: This story has been updated to include additional information provided by Meta after the story was originally published, stating that a portion of the $30.3 million in ad revenue came from ads that did not violate its standards but were published by organizations participating in CIB operations. The headline has also been updated to reflect that the ad revenue came from fake accounts spreading a wide range of content, not just misinformation.
Credit: www.wired.com