Featured Article

Google blocks Truth Social from the Play Store — Will Apple be next?

Trump’s social media app is alive and well in the App Store in spite of the same dangerous content


Image Credits: TechCrunch/Truth Social/Google Play

Google’s decision to block the Truth Social app’s launch on the Play Store over content moderation issues raises the question of why Apple hasn’t taken similar action against the iOS version of the app, which has been live on the App Store since February. According to a report by Axios, Google found numerous posts that violated its Play Store content policies, blocking the app from going live on its platform. But some of the same types of posts appear to be available on the iOS app, TechCrunch found.

This could trigger a re-review of Truth Social’s iOS app at some point, as Apple’s and Google’s policies are largely aligned on how apps with user-generated content must be moderated.

Axios this week first reported Google’s decision to block the distribution of the Truth Social app on its platform, following an interview given by the app’s CEO, Devin Nunes. The former Congressman and member of Trump’s transition team, now social media CEO, suggested that the holdup with the app’s Android release was on Google’s side, saying, “We’re waiting on them to approve us, and I don’t know what’s taking so long.”

But this was a mischaracterization of the situation, Google said. After Google reviewed Truth Social’s latest submission to the Play Store, it found multiple policy violations, which it informed Truth Social about on August 19. Google also told Truth Social how those problems could be addressed in order to gain entry to the Play Store, the company noted.

“Last week, Truth Social wrote back acknowledging our feedback and saying that they are working on addressing these issues,” a Google spokesperson shared in a statement. This communication between the parties took place a week before Nunes’s interview, in which he implied the ball was now in Google’s court. (The subtext to his comments, of course, was that conservative media was being censored by Big Tech once again.)

The issue at hand stems from Google’s policy for apps that feature user-generated content, or UGC. According to this policy, apps of this nature must implement “robust, effective and ongoing UGC moderation, as is reasonable and consistent with the type of UGC hosted by the app.” Truth Social’s moderation, however, is not robust. The company has publicly said it relies on an automated AI moderation system, Hive, to detect and censor content that violates its own policies. On its website, Truth Social notes that human moderators “oversee” the moderation process, suggesting it uses an industry-standard blend of AI and human moderation. (Of note, the app store intelligence firm Apptopia told TechCrunch that the Truth Social mobile app is not using the Hive AI, but it says the implementation could be server-side, which would be beyond the scope of what it can see.)

Truth Social’s use of AI-powered moderation doesn’t necessarily mean the system is sufficient to bring the app into compliance with Google’s policies. The quality of AI-detection systems varies, and those systems ultimately enforce a set of rules that a company itself decides to implement. According to Google, several Truth Social posts it encountered contained physical threats and incitements to violence — content the Play Store policy prohibits.

Image Credits: Truth Social’s Play Store listing

We understand Google specifically pointed to the language in its User Generated Content policy and Inappropriate Content policy when making its determination about Truth Social. These policies include the following requirements:

Apps that contain or feature UGC must:

  • Require that users accept the app’s terms of use and/or user policy before users can create or upload UGC.

  • Define objectionable content and behaviors (in a way that complies with Play’s Developer Program Policies), and prohibit them in the app’s terms of use or user policies.

  • Implement robust, effective and ongoing UGC moderation, as is reasonable and consistent with the type of UGC hosted by the app.

And:

  • Hate Speech — We don’t allow apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, caste, immigration status, or any other characteristic that is associated with systemic discrimination or marginalization.

  • Violence — We don’t allow apps that depict or facilitate gratuitous violence or other dangerous activities.

  • Terrorist Content — We don’t allow apps with content related to terrorism, such as content that promotes terrorist acts, incites violence, or celebrates terrorist attacks.

And while users may be able to initially post such content — no system is perfect — an app with user-generated content like Truth Social (or Facebook or Twitter, for that matter) would need to be able to take down those posts in a timely fashion in order to be considered in compliance.

In the interim, the Truth Social app is not technically “banned” from Google Play — in fact, Truth Social is still listed for preorder today, as Nunes also pointed out. It could still make changes to come into compliance, or it could choose another means of distribution.

Unlike iOS apps, Android apps can be sideloaded or submitted to third-party app stores like those run by Amazon, Samsung, and others. Or, Truth Social could opt to do what the conservative social media app Parler did after its suspensions from the app stores last year. While Parler chose to make adjustments in order to return to Apple’s App Store, it now distributes the Android version of its app directly from its website — not the Play Store.

To get back into the App Store last year, Parler reportedly agreed to scan for hate speech and remove that content on the iOS version of the app. Truth Social has no policy prohibiting hate speech, though its rules ostensibly prohibit violent threats and inciting physical harm.

While Truth Social decides its course for Android, an examination of posts on Truth Social’s iOS version revealed a range of anti-Semitic content, including Holocaust denial, as well as posts promoting the hanging of public officials and others (including those in the LGBTQ+ community), posts advocating for civil war, posts in support of white supremacy, and many other categories that would seem to be in violation of Apple’s own policies around objectionable content and UGC apps. Few were behind a moderation screen.

It’s not clear why Apple has not taken action against Truth Social, as the company hasn’t commented. One possibility is that, at the time of Truth Social’s original submission to Apple’s App Store, the brand-new app had very little content for an App Review team to parse, so it didn’t have any violative content to flag. Truth Social does use content filtering screens on iOS to hide some posts behind a click-through warning, but TechCrunch found the use of those screens to be haphazard. While the content screens obscured some posts that appeared to break the app’s rules, the screens also obscured many posts that did not contain objectionable content.

Assuming Apple takes no action, Truth Social would not be the first app to grow out of the pro-Trump online ecosystem and find a home on the App Store. A number of other apps designed to lure the political right with lofty promises about an absence of censorship have also obtained a green light from Apple.

Social networks Gettr and Parler and video sharing app Rumble all court roughly the same audience with similar claims of “hands-off” moderation and are available for download on the App Store. Gettr and Rumble are both available on the Google Play Store, but Google removed Parler in January 2021 for inciting violence related to the Capitol attack and has not reinstated it since.

All three apps have ties to Trump. Gettr was created by former Trump advisor Jason Miller, while Parler launched with the financial blessing of major Trump donor Rebekah Mercer, who took a more active role in steering the company after the January 6 attack on the U.S. Capitol. Late last year, Rumble struck a content deal with former President Trump’s media company, Trump Media & Technology Group (TMTG), to provide video content for Truth Social.

Many social networks were implicated in the January 6 attack — both mainstream social networks and apps explicitly catering to Trump supporters. On Facebook, election conspiracy theorists flocked to popular groups and organized openly around hashtags such as #RiggedElection and #ElectionFraud. Parler users featured prominently among the rioters who rushed into the U.S. Capitol, and Gizmodo identified some of those users through GPS metadata attached to their video posts.

Today, Truth Social is a haven for political groups and individuals who were ousted from mainstream platforms over concerns that they might incite violence. Former president Trump, who founded the app, is the most prominent among deplatformed figures to set up shop there, but Truth Social also offers a refuge to QAnon, a cultlike political conspiracy theory that has been explicitly barred from mainstream social networks like Twitter, YouTube and Facebook due to its association with acts of violence.

Over the last few years alone, those acts include a California father who said he shot his two children with a speargun due to his belief in QAnon delusions, a New York man who killed a mob boss and later appeared in court with a “Q” written on his palm, and various incidents of domestic terrorism that preceded the Capitol attack. In late 2020, Facebook and YouTube both tightened their platform rules to clean up QAnon content after years of allowing it to flourish. In January 2021, Twitter alone cracked down on a network of more than 70,000 accounts sharing QAnon-related content, with other social networks following suit and taking the threat seriously in light of the Capitol attack.

A report released this week by media watchdog NewsGuard details how the QAnon movement is alive and well on Truth Social, where a number of verified accounts continue to promote the conspiracy theory. Former president Trump, Truth Social CEO and former House representative Devin Nunes, and Patrick Orlando, CEO of Truth Social’s financial backer Digital World Acquisition Corporation (DWAC) have all promoted QAnon content in recent months.

Earlier this week, former president Trump launched a blitz of posts explicitly promoting QAnon, openly citing the conspiracy theory linked to violence and domestic terrorism rather than relying on coded language to speak to its supporters, as he has in the past. That escalation, paired with the ongoing federal investigation into Trump’s alleged mishandling of high-stakes classified information — a situation that has already inspired real-world violence — raises the stakes on a social app where the former president can openly communicate with his followers in real time.

That Google would take preemptive action to keep Truth Social from the Play Store while Apple, so far, is allowing it to operate is an interesting shift in the two tech giants’ policies on app store moderation and policing. Historically, Apple has taken a heavier hand in App Store moderation — culling apps that weren’t up to its standards, were poorly designed, too adult, too spammy, or even just operating in a gray area that Apple later decided needed enforcement. Why Apple is hands-off in this particular instance isn’t clear, but the company has come under intense federal scrutiny in recent months over its interventionist approach to the lucrative app marketplace.

Update, 9/2/22, 2:53 PM ET: Following publication, Google reinstated another previously banned social app, Parler, to its Play Store, saying it was now in compliance with its policies. A spokesperson noted in a statement:

“As we’ve long stated, apps are able to appear on Google Play provided they comply with Play’s developer policies. All apps on Google Play that feature User Generated Content (UGC) are required to implement robust moderation practices that prohibit objectionable content, provide an in-app system for reporting objectionable UGC, take action against that UGC where appropriate, and remove or block abusive users who violate the app’s terms of use and/or user policy.” 
