October 10, 2024 | Memo
Nip the Bots in the Bud
Proactively Taking Down and Preventing the Creation of Inauthentic Social Media Entities
A years-long Chinese influence operation known as Spamouflage has persisted on social media despite numerous attempts by various platforms to remove it.1 The operation’s endurance can be explained to a considerable extent by its exploitation of vast networks of pre-positioned, inauthentic social media entities lying ready for Spamouflage and other malicious actors to deploy.2
The Foundation for Defense of Democracies (FDD) has uncovered more than 25,000 inauthentic Facebook pages with names identical or similar to previously identified Spamouflage entities. While these pages do not appear to be currently engaged in influence operations, several are conducting ongoing financial scams. Taking down these pages (and other similar networks) would help not only to restrain state-backed influence operations like Spamouflage but also to thwart other malicious activity on social media platforms.
Meta and other major social media companies must significantly strengthen identity verification to make it more difficult to create networks of inauthentic entities on social media platforms in the first place. Platforms should implement solutions that preserve privacy while verifying the identity of persons creating new entities.
By nipping the bots in the bud, the community of stakeholders seeking to combat foreign influence operations, scams, and other forms of malicious online activity can move beyond playing whack-a-mole and instead get one step ahead of adversaries.
How Did FDD Discover 25,000 Inauthentic Pages?
FDD previously uncovered 450 Spamouflage entities (i.e., profiles and pages) on Facebook in a network we named the ‘War of Somethings.’3 (‘Entities’ is a general term that Meta uses in its own policies to refer to social media pages, user profiles, groups, and events.) FDD exposed this network in March 2024 and reported it to Meta before publishing its findings. After FDD shared the network with Meta, Meta purged 162 of the 450 Spamouflage entities.
The operators of the ‘War of Somethings’ network used fabricated personas and hacked accounts to draw attention to perceived U.S. domestic policy failures ranging from gun violence to drug deaths. The network also criticized U.S. foreign policy, with a particular focus on the Israel-Hamas war, and discussed the 2024 U.S. elections.
Even after Meta took down some of the FDD-reported entities, searches on Facebook for pages with identical names revealed thousands of entities with either identical names or permutations of the original names.4
Take, for example, the Facebook profile “Denise Jones.” When researching the ‘War of Somethings’ network, FDD observed “Denise Jones” sharing both English-language content highly critical of the U.S. government and Chinese-language content promoting China with hashtags such as #AmazingBeijing. Based on a variety of distinctive behaviors exhibited by this profile — such as posting largely during Chinese working hours and sharing content almost exclusively from entities associated with Spamouflage — FDD assessed with high confidence that “Denise Jones” was part of a Spamouflage campaign.
A later search by FDD for the name “Denise Jones” generated results that included more than 124 inauthentic pages, all of which were either named “Denise Jones” or included “Denise Jones” in their name. In some cases, the indicator of artificiality was an AI-generated profile photo, as indicated by the presence of surreal images on or around the face. (See Figures 1-2.)5 Other cases involved incongruities among the page’s name, photos, description, and behavior. For example, one page named “Denise Jones (Houston)” lists itself as an advertising agency in Houston but uses a sexually suggestive profile photo and primarily shares Vietnamese-language content.6
Patterns across groups of pages strongly suggested someone created the accounts in batches. Of the 124 inauthentic “Denise Jones” pages, 19 shared the name “Denise Jones Online” followed by a five-digit number. (See Figure 3.) Many “Denise Jones” pages also included identical page descriptions. For example, 18 of them use the tagline “Hi there, want more beer?” in their description.
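Batch indicators like these lend themselves to simple programmatic checks. The Python sketch below uses invented page records for illustration (FDD has not published its methodology as code); it flags names following the “Online” plus five-digit convention and counts repeated description taglines, repetition being a signal of batch creation:

```python
import re
from collections import Counter

# Hypothetical sample of scraped page records mirroring the patterns above.
pages = [
    {"name": "Denise Jones Online 48213", "description": "Hi there, want more beer?"},
    {"name": "Denise Jones Online 90577", "description": "Hi there, want more beer?"},
    {"name": "Denise Jones (Houston)",    "description": "Advertising agency"},
    {"name": "Denise Jones",              "description": "Hi there, want more beer?"},
]

# Flag names following the "<base name> Online <five digits>" convention.
batch_name = re.compile(r"Online \d{5}$")
flagged_names = [p["name"] for p in pages if batch_name.search(p["name"])]

# Count identical descriptions; the same tagline across many pages
# suggests they were generated together.
tagline_counts = Counter(p["description"] for p in pages)
repeated_taglines = {tag: n for tag, n in tagline_counts.items() if n > 1}

print(flagged_names)
print(repeated_taglines)
```

At scale, the same two checks (a numeric-suffix regex and a duplicate-description counter) could run over full search-result dumps rather than a hand-built list.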
To find other inauthentic pages, FDD conducted searches on Facebook for the names of entities previously identified as part of Spamouflage operations not only by FDD but also by the BBC, Graphika, the Australian Strategic Policy Institute, Mandiant, and Microsoft.7
With this process, FDD identified 212 Spamouflage entities with associated groups of inauthentic pages. These entities are listed in Appendix A.8
After identifying these 212 entities, FDD pulled Facebook page search results associated with each entity. This process identified 50,000 Facebook pages in total, including both authentic and inauthentic pages. Looking for telltale indicators of inauthenticity, FDD identified over 25,000 inauthentic pages with names identical or similar to previously identified Spamouflage entities. Appendix B includes a selection of 22 potential indicators of inauthenticity to illustrate what attributes FDD used to determine a page’s status.
What Are These 25,000 Pages Doing?
None of the 25,000 inauthentic pages that FDD identified appears to be currently engaged in influence operations. Many of these pages, in fact, appear not to be engaged in any activity at all.
FDD did, however, identify several pages promoting financial scams. For example, one page named “Agent James Smith” claimed to be distributing up to $7 million in gifts and donations on behalf of FedEx. (See Figure 4.) The page posted photos of FedEx trucks and people receiving jumbo checks from Publishers Clearing House sweepstakes.9 Pages named “AGENT SMITH JAMES,” “Agent Peggy Ashley Wilson,” “AGENT JANE ALICE,” and others promoted similar scams.10
This scam activity suggests that whoever created these vast networks of inauthentic pages did not do so expressly for influence operations. Rather, the operators of Spamouflage likely purchased or appropriated pages from these networks to save the effort of creating their own.
Who Created These Pages and How?
Many of the 25,000 inauthentic pages display consistencies in naming conventions, page descriptions, and images. For example, the search results for more than 87 of the 212 distinct Spamouflage-related entities included at least one result with the tagline “Hi there, want more beer?” in the page’s description. Thus, in addition to being connected to previously identified Spamouflage accounts, many of the 25,000 inauthentic pages appear to be connected to each other because of their shared features.
Other common features that extended across the inauthentic pages include taglines such as “Vinicius Junior” and “Hello World.” Several pages also include the word “Agent” affixed before the page’s name, often with a photo of a government official as a profile image. Another pattern involves 15-digit numerical strings appended to the end of the page’s name, and another includes exactly four random emojis in the page’s description. Appendix B outlines these patterns and others in more detail.
These patterns demonstrate that whoever created these pages used automation, likely involving tools provided by the black hat marketing website Fewfeed.11 (‘Black hat’ is a term for deceptive and/or manipulative approaches to marketing.12) The connection to Fewfeed was plainly visible thanks to page descriptions that read “This page was generated by fewfeed v2” alongside four random emojis.
Fewfeed has a tool that allows users to automatically create Facebook pages with identical descriptions and assign up to 500 random names to these pages.13 The tool has a default setting to create pages with the description “This page was generated by fewfeed v2.” It also provides the option to add random emojis at the end of the page description. The tool could also have created other observed clusters of pages with identical descriptions, several of which have random strings of emojis appended. (See Appendix B.)
From the available evidence, one cannot know whether Spamouflage operators used tools like Fewfeed’s automatic page generator, or whether Spamouflage purchased the pages from a separate creator. Indeed, black hat marketers regularly advertise Facebook entities for sale on messaging apps such as Telegram and in fora such as Blackhatworld[.]com.14 (See Figures 6 and 7.) The ability of Spamouflage’s operators to purchase pre-positioned inauthentic entities may explain the operation’s persistence despite multiple takedowns by Facebook and other social media platforms.15
Purchasing entities from a third party saves time since building new pages would require creating unique emails and solving CAPTCHA puzzles, which seek to deter automated bots from creating entities by generating challenges that, ideally, humans but not computers can pass.16 Purchasing entities from a third party may also save money since it may eliminate the need to procure phones and SIM cards. Though Facebook usually requires only an email address to create an account, Spamouflage’s operators may need phone numbers to set up or verify email accounts, depending on the email service.
Recommendations: Enhance Takedowns and Prevent the Creation of Inauthentic Entities
Meta reports that it detects and removes over a billion fake entities each quarter, relying on a combination of technology and human review teams with more than 15,000 contractors.17 While this effort is laudable, there are still vast quantities of fake entities that enable influence operations, scams, and other illicit activity on Facebook. Meta can do more.
This investigation focuses exclusively on Facebook. That said, Facebook is not unique in the way that state actors are exploiting it.18 Other social media companies should also take similar action to protect the integrity of their platforms.
1. Strengthen identity verification for account creation while preserving privacy.
Meta currently requires a phone number or email to create an account. Malicious actors can meet these requirements by acquiring fresh phone numbers or emails. For example, they can purchase a prepaid SIM card at a gas station or simply create a new email address. Meta also sometimes requires current users or users trying to create an account to provide a so-called “video selfie” to verify their identity.19 It is unclear what conditions trigger this requirement.
Requiring a video selfie may feel overly invasive to Facebook users. Moreover, malicious actors may be able to bypass video selfie requirements with AI face swap tools or AI tools that generate videos from still images of faces.
Meta and other social media companies must therefore strike a balance between verification and privacy. Accordingly, social media companies should explore solutions such as zero-knowledge proofs for identity, which enable people to prove they know certain information without disclosing the information itself. In the case of identity verification, people could use zero-knowledge proofs to verify aspects of their identity (e.g., citizenship or age) without disclosing the underlying personal data.
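The idea behind zero-knowledge proofs of knowledge can be illustrated with the classic Schnorr identification protocol, in which a prover demonstrates knowledge of a secret without revealing it. The sketch below is a minimal textbook illustration in Python with toy parameters; it is not production cryptography, and a real identity-verification system would rely on standardized groups, vetted libraries, and attribute-based credentials:

```python
import random

def schnorr_proof_demo(p: int, g: int, secret_x: int) -> bool:
    """One round of the Schnorr identification protocol: the prover
    shows it knows x such that y = g^x mod p without revealing x."""
    y = pow(g, secret_x, p)            # public value derived from the secret
    r = random.randrange(1, p - 1)     # prover's one-time random nonce
    t = pow(g, r, p)                   # prover's commitment
    c = random.randrange(1, p - 1)     # verifier's random challenge
    s = (r + c * secret_x) % (p - 1)   # prover's response
    # Verifier accepts iff g^s == t * y^c (mod p). The response s leaks
    # nothing about x on its own because the nonce r is random.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Toy parameters: a Mersenne prime and an arbitrary secret, chosen only
# to make the arithmetic fast in a demo.
print(schnorr_proof_demo(2**61 - 1, 3, 123456789))  # True
```

The same structure (commitment, random challenge, response) underlies the credential schemes a platform could use to confirm, say, that an applicant holds a government-issued ID without ever seeing the ID itself.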
Blockchain technology can also help. Early cryptocurrency researchers focused on blockchain applications for information security.20 A blockchain-based account verification solution for social media could disincentivize undesirable behavior, such as the mass creation of inauthentic identities, by imposing a cost or by creating a reward for alternative behaviors.
2. Assess existing enforcement measures and monitor the market for inauthentic entities.
The 25,000 fake entities that FDD discovered clearly violate Meta’s Account Integrity and Authentic Identity policies, which Meta extends beyond user accounts to “other entities (such as pages, groups, and events).”21 The persistence of the networks identified by FDD indicates that there are gaps in Meta’s policy enforcement.
It is possible, for example, that Facebook did not detect the 25,000 entities because they engaged in very limited activity. Meta’s reporting on its enforcement actions specifies that Meta removes pages “if the name, description or cover photo of a Page or group violates our Community Standards.”22 This means that Meta should be able to remove inauthentic pages, even those that have not yet posted any content, based on the page name and description alone.
To address these gaps in enforcement, Meta and other social media companies should undertake regular red team exercises in which the red team creates networks of inauthentic entities to gauge how long it takes social media companies to identify and respond to them. When inauthentic entities go undetected for a long period of time, the red team can report this gap in enforcement.
Red teams can also test the strength of social media companies’ identity verification and account creation processes. For example, red teams can attempt to create accounts using various email service providers to determine which providers have safeguards that prevent new or little-used email accounts from creating social media entities. Social media companies can then apply further scrutiny to accounts created using addresses from high-risk email service providers.
In addition to conducting red team exercises, social media companies should adopt proactive threat intelligence practices to help prevent the creation of inauthentic entities. In the field of cybersecurity, certain threat intelligence communities infiltrate hacker fora and procure services to gain proactive intelligence. The NATO Strategic Communications Centre of Excellence (NATO STRATCOM) has performed similar work in the social media realm. For a series of studies, NATO STRATCOM purchased inauthentic interactions from commercial media manipulation service providers. These studies measured the amplification of benign content on a small scale to reveal weaknesses in social media companies’ ability to detect and respond to platform manipulation.23
Social media companies can produce proactive threat intelligence by infiltrating and monitoring black hat marketing fora. Social media companies can even anonymously pay for services to observe the behavior of threat actors. As demonstrated in this paper, the same assets can be and are often used to fuel a wide array of malicious social media activity — be it financial scams or influence operations. Understanding more about how people create, sell, and use these assets will help social media companies detect and prevent this wide range of malicious social media activity.
3. Host “clean up” events where independent researchers actively search for inauthentic entities. Share intelligence with other platforms.
To supplement ongoing takedowns of inauthentic entities, social media companies should host regular events where independent researchers and non-profits can actively search for inauthentic entities as a form of public service. Social media companies should ensure these researchers have access to the data necessary to identify as many entities as quickly and easily as possible.
Malicious networks can also operate across online platforms, be they mainstream social media platforms such as Facebook and X, media-sharing platforms such as YouTube, discussion platforms such as Reddit, or others. For example, researchers have documented various Spamouflage networks operating across over 50 online platforms.24 Therefore, social media companies should also expand partnerships with each other to help them better combat cross-platform networks of malicious online activity.
Conclusion
By undertaking these recommendations, Meta and other social media platforms can proactively take down and prevent the creation of inauthentic entities in a more effective and efficient fashion. This will make it more difficult for U.S. adversaries to launch influence operations on social media platforms in the first place. Going after the underlying ‘infrastructure of influence’ on social media — be it networks of pre-created inauthentic entities, or the black hat marketing tools and platform policy loopholes that enable malicious actors to create them en masse — will help stem the seemingly constant stream of malign influence targeting the United States and its allies.
Appendix A: Previously Identified Spamouflage Entities
The list below presents Spamouflage entities previously identified by FDD or other researchers, specifically, those that had associated groups of inauthentic entities.
Entities identified as part of the “War of Somethings” network discovered by FDD25
- Isobel Caldwell
- Amelia Murray
- Grace Fisher
- Martha Coles
- Phoebe Smith
- Emily Walsh
- Jared Dolan
- Cecil Walsh
- Heather Robinson
- Sara Jones
- Callie Austin
- Jim Davies
- Raj Shope
- Isabella Fraser
- Santosh Kumar
- Lakshman Kumar
- Janice Collings
- Jibon Shathi
- Zach Simpson
- Lacey Anderson
- Jennifer Johnson
- Dustu Chele
- Tyler Tara
- Claretta Whittaker
- Agus Tina
- Denise Jones
- Merrie
- Erix Boy
- Nakata
- Astri
- Jim Davies
- Sergey Ponomarev
- Amber Jennings
- Juana Gutierrez
- Sabrina Brewer
- Ekaterina Nikitina
- Angie King
- Shashi Kanth
- Dwayne Bartholemew
- Selina Chris
- Antonio Holcombe
- Vickie Frazier
- Brendan Askew
- Chan Emma
- Diana Lin
- Yu Ting Gao
- Ranjan Mandal
- Emily Walsh
- Sara Jones
- Callie Austin
- Emely Ellison
- Eve Parry
- Meghan Bailey
- Lorena Braz Paiva
- Sophia Evan
- Andrea Acosta
- Elane Shay
- Phoebe Smith
- Richard Kyle
- Aulia Jr.
- Shaun Reece
- Felipe Teixeira
- Todd Chen
- Lauran Bosley
- Mary Joesy
- Md Asadul
- Kate Kane
- Ryan Jai Secend Jai
- Jose Avery
- Montgomery Sandy
- Rita Roy
- Patricia Lin
- Lauren Moulton
- Emery Miller
- Santos Abraham
- Susan Byrne
- Jeffrey Mendez
- Doreen Buchanan
- Derek Frye
- Tracie Bowden
- Donald Rosales
- Kim Nixon
- Anthony Duffy
- Emanuel Kohler
- Stefan Valdes
- Jan Hampton
- Donovan Schultz
- Marth Robledo
- Giovanni Barth
- Dennise Sheppard
- Jensen Smith
- Minh Triết
- Chandudas Chandudas
- Christina Lopez
- Margurite Ortega
- Jennie Kennedy
- Lee Sorensen
- Alica Ellis
- Kim Jack
- Christoper Grier
- Marco Davila
- Henry Lyle
- Leticia Madsen
- Kelly Ferrari
- Guillermo Davis
- Maryln Walters
- Bryant Beard
- Maximo Gentry
- Reginia Tavares
- Morgan Dees
- Morosos Escuintila
- Julio Foley
- Eve Bateman
Entities identified as part of BBC reporting26
- Logan Lee
- Merle Thompson
- Elena Johnson
- Md Joy Jr.
Entities identified as part of Graphika reports27
- Michael Welsh
- Lida Wong
- Helena Hart
- Regina Montgomery
- Malcolm Daly
- Yiyi Chen
- Li Sandy
- Peng Lei
- Dali Le
- Polina Novikova
- Oksana Rodionova
- Irina Ivanova
- Nhu Vu
- Tuong Cuc
- Kay Payne
- Alicia Carpenter
- Jean Miles
- Lois Peters
- Sandra Barnes
- Mae Murray
- Jessica Harvey
- He Jingrun
- Francis W
- Victor Chan
- Spencer Taggart
- Charles Carter
- Robert Collins
- Isaac Jones
- Daniel Evans
- Christopher Garcia
- Richard Turner
- Joseph Williams
- Steven Turner
- Kyler Lewis
- Kimberly Miller
- Anna Conner
- Kelly Bell
- Paula Smith
- Hannah Marie
- Angie Evans
Entities identified by the Australian Strategic Policy Institute28
- Mike Ahmed
- Lori Gutierrez
- Yui Aragaki
- Renea
- Nora Gonzalez
- Jeanette Torres
- Jackie Garcia
- Sonya Bell
Entities identified by Microsoft Threat Analysis Center29
- James Maddison
- Carla Walker
- Donald Marshall
- Brett Ferguson
- Anthony Robert
- Jiang Peter
- Peter Douglas
- Nick Wells
- Fred Valerie
- Edward Allen
- George Gonzalez
- Steven Baker
- Robert Hall
- Beverly Jiminez
- Alice Bell
- Charlotte Walker
- Scarlette King
- Charles Brown
- Patti May
- Ken Howell
- Lee Conway
- Virginia Gay
- Eric Rutherford
- John Osborne
- Milagros Geraldo
- Honor Carter
- Roman Peter
- Abigail Dennis
- Jim Green
Entities identified by Mandiant30
- Karen Diaz
- Kimberly Allen
- Nikki Brown
- Yasmine Kelly
- Jackie Eberhart
- Jessica Smith
- Teresa Brown
- Gladys Wright
- Jennifer Foster
- Tara Garcia
- Heather Parks
- Anita Bell
- Ashley Wilson
- Brown Emily
- Gonzales Bonnie
- Tamara Smith
- Jane Alice
- Betty Li
Appendix B: Patterns Associated with Inauthentic Pages Identified by FDD
This appendix provides an overview of significant patterns observed within the 25,000 inauthentic pages identified by FDD. It offers high-level descriptions of each pattern, with a screenshot framing five examples of each. The patterns not only indicate inauthenticity but also suggest batch creation — that is, that the pages were created en masse and likely with automation. These pages generally have very limited or no activity and followers.
1. AI-generated “deepfake” profile photos
Numerous influence operations and other forms of malicious social media activity employ AI-generated “deepfake” profile photos. FDD identified many pages with such photos. The pages shown below provide a small sample.
To demonstrate that these profile photos are all deepfakes, FDD superimposed semi-transparent versions of each profile photo upon each other. This demonstrates that the eyes of each profile photo are centralized and aligned with each other, a textbook indicator of deepfake faces.31
2. Combination Names
FDD identified a pattern combining multiple names into strings of three or four names. Many of these combinations appear highly improbable, even combining names typically associated with different genders or different regions. For example, “Heather Robinson Deividas Šimkus,” seen below, combines a female name of English-language origin with a male name of Lithuanian origin. Many of the inauthentic pages with combination names reiterate their names in the page description, as seen in the sample below.
3. “Hi there, want more beer?” Tagline
One of the first patterns FDD identified includes the tagline “Hi there, want more beer?” as the page description. FDD identified this tagline on thousands of pages. It was also mentioned in a Facebook Scam Reporting Group as part of a scammer’s pages.32
4. “Vinicius Junior” Tagline
FDD identified a pattern including “Vinicius Junior” in the page description. Vinicius Junior is a Brazilian soccer player.
5. Russian-Language Taglines
This pattern involves uniformly recurring taglines with various Russian words in their page description. FDD identified repetition of the following words in page descriptions of inauthentic pages: Кофейня (coffee house), Фестиваль (festival), Танцор (dancer), Журналист (journalist), Фотостудия (photo studio), Магазин одежды (clothing store), and Актер (actor).
6. Pages Listed as Vietnamese AIDS Resource Center
FDD identified a pattern of pages with (often identical) human names and no profile photos, all categorized as AIDS Resource Centers and allegedly located in Vietnam.
7. Tagline “a”
Similar to “Hi there, want more beer?” and “Vinicius Junior” taglines, FDD identified another pattern of pages with the single character “a” in their page description.
8. Page Names Matching Description Ending in 15-Digit Numerical Strings
This pattern involves pages with combined names in which the page description is the name of the page followed by a 15-digit number.
9. Four Random Emojis
One pattern includes pages with page descriptions including four random emojis without additional text or content — for example, “☝️ 😏 🤑 😭.”
10. Three Random Emojis and Dates
FDD identified a pattern involving page descriptions with three random emojis and a date and timestamp — for example, “🌱 🌤 🌕 — 2022-11-21 04:11:17.”
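Patterns like these are mechanical enough to express as simple checks. The Python sketch below uses invented example strings to illustrate detectors for patterns 8 through 10; the Unicode emoji ranges are a rough approximation (a production classifier would use a maintained emoji list, such as the one defined by Unicode Technical Standard #51):

```python
import re

# Pattern 8: page name ending in a 15-digit numerical string.
NAME_15_DIGITS = re.compile(r"\d{15}$")

# Pattern 10: description ending in a "YYYY-MM-DD HH:MM:SS" timestamp.
TIMESTAMP_DESC = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$")

# Pattern 9: description that is exactly four emojis. The character
# ranges below are a rough approximation of the emoji blocks; the
# optional \uFE0F handles emoji variation selectors.
EMOJI = re.compile(r"[\U0001F000-\U0001FAFF\u2600-\u27BF\u2B00-\u2BFF]\uFE0F?")

def is_four_emoji_description(desc: str) -> bool:
    """Return True if the description is exactly four emoji tokens."""
    tokens = desc.split()
    return len(tokens) == 4 and all(EMOJI.fullmatch(t) for t in tokens)
```

None of these signals is conclusive on its own; in practice a page would be flagged when several such indicators co-occur, as in the clusters FDD describes.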
11. Identical Japanese Tagline
This pattern involves page descriptions with identical Japanese taglines. Specifically, the pages include the phrase “株投資】誰でも稼げる米国株投資〜富裕層になるための鉄則〜,” which translates to: “[Stock Investment] Investing in U.S. stocks that anyone can earn ~ Iron rules for becoming wealthy ~.”
12. Emojis and Identical Vietnamese Tagline
FDD identified a pattern involving page descriptions with three emojis followed by a period and an identical Vietnamese tagline. This pattern included, for example, “⏮🔌💠. Nghiêm Cấm sử dụng cho mục đích trái pháp luật của Việt Nam,” which translates to “Use for purposes contrary to Vietnamese law is strictly prohibited.”
13. Alleged Government Agents
One pattern includes pages with the title “Agent.” The page names are often identical — for example, multiple pages are named “Agent Jessica Smith.” Several pages with this naming pattern promote financial scams and contain taglines like “Claimed your own winning money from me.” There is a subset of pages with this naming pattern using photos of government employees as profile photos. Pages in this subset are mainly categorized as “Governmental Organizations.”
14. “Online” with Five-Digit String and Variations
FDD identified another pattern involving a page name incorporating the word “online” with a five-digit numerical string at the end. Variations include the phrase “online shope” appended at the end of the page name, often followed by a two-digit string, among others.
15. Elegance Tagline
This pattern involves pages with descriptions including the phrase “Where elegance and excellence converge, a haven of refined beauty.”
16. S.R. in Page Name
FDD identified pages with names including “S.R.” One subset includes two names followed by “S.R_” and a six-digit or four-digit number — for example, “Charles Carter (S.R_123456).” Another subset involves pages named “Md Asadul Islam” with the phrase “S.R Service_” ending in a four-digit or five-digit numerical string.
17. Decimal Names
Another pattern includes a two-word name followed by a three-digit number containing a decimal point, then one additional name (e.g., “Jack Kim 22.3 Stefan”).
18. Inaccurate Address Information
FDD identified a pattern involving pages with identical or similar human names followed by an address that does not exist. Most included some version of “(York, NY),” and some included more specific addresses, for example, “7th Avenue.” York is a small town of around 3,000 people that does not contain the addresses included in these pages. Whoever created these accounts might have meant to put New York City.
19. Pages Categorized as Models With Locations Listed in the Page Name
This pattern includes pages categorized as models that include parentheses at the end of the page name enclosing names of various U.S. locations — for example, “Elena Johnson (Santa Clarita, CA)” and “Phoebe Smith (Green Bay, WI).”
20. “Hello World” Tagline
FDD identified another pattern involving the phrase “Hello World” in the page description.
21. Generated by Fewfeed v2
One pattern involves pages with the description “This page was generated by fewfeed v2” followed by four random emojis. These pages reference the website Fewfeed, which offers various social media services, including the automatic creation of multiple Facebook pages. The page creation tool’s default setting creates pages with the tagline “This page was generated by fewfeed v2,” with an option to add emojis to the end.
22. Repeated Russian Names + Alphabetical Strings
FDD identified a pattern involving pages with two Russian names connected by a dash or period — such as Irina-Ivanova or Ekaterina.Nikitina — followed by strings of apparently random letters. A subset also includes four-digit numbers after the string of random letters. All pages have the page type “Spa, Beauty & Personal Care” and no page activity.