A Go-To Guide for Trust & Safety, Tech’s Most Pressing Issue
As venture capitalists, we are in the business of constantly hunting for the future of, well, pretty much everything. But although innovation is all around us, it is rare to come across an opportunity not merely to invent a new product, feature, service, or business model, but to take part in launching a trailblazing service in a new business vertical that is set to drive a paradigm shift in the modern world. Lately, thanks to Grove Ventures' newest investment in ActiveFence, my team and I have been lucky enough to watch such a creation from the front row as the Trust and Safety industry rapidly accelerates. This is an in-depth dive into the subject and my thesis (longer than my usual posts, as this is new territory for many). So sit back, relax, and get ready to understand why the world is about to witness a Trust and Safety technology boom.
Trust and Safety: From a Small Niche to Everyone’s Problem
Trust and Safety is a term commonly used on platforms where people come together and interact: social media applications, marketplaces, ranking sites, dating services, video streaming services, file sharing platforms, and others. To keep using these platforms, most users need to be able to trust them. Trust is gained when people are confident and comfortable that they are not in any danger or at any risk, that they are safe: when they know that their private information is treated carefully, when they themselves (and their children) are treated respectfully and fairly, and when they are not exposed to harmful content. Together, these two words, trust and safety, describe the change in attitude and mindset of online businesses.
Keeping users protected from things like harmful content, disinformation campaigns, child abuse, pedophilia, bullying, spam and fraud? De facto, this problem is as old as the internet itself. But things have changed dramatically, and Trust and Safety has become everyone's problem. A decade ago, the written guidelines for Facebook's online safety were about a page long, and were often summarized internally as:
“If something makes you feel bad in your gut, take it down.”
Fast forward more than ten years, and the naïve, one-page rules of Trust & Safety are over: big tech platforms' safety guidelines are as lengthy as a thick book, huge content moderation teams have been formed, advertisers have started hiring Brand Safety Officers, and even the U.S. president himself, Joe Biden, is involved, blaming social media platforms for not taking enough action; he was quoted this July saying: ‘COVID misinformation on Facebook is killing people’.
One of the first people to understand this trajectory was ActiveFence's CEO, Noam Schwartz. He and I had lunch in NYC, around Union Square. Instead of focusing on the ramen we were eating, Noam was preoccupied with showing me horrific videos online that ruined my appetite. After having his first daughter, Noam became obsessed with fixing the internet's safety issues. I was so caught up in his passion that the moment he came back to Israel and told me he was planning to create a company to tackle this issue, I knew I wanted to be a part of the journey. With time, I found myself adding many of my closest friends to the team. Before we talk about where we are today, though, let's start with the basics.
The First Wave: The Rise of Content Moderation
Online failures are nothing new, but since the mid-nineties an important question has arisen that in many ways shaped the internet: should online service providers be treated as publishers or as distributors of the content created by their users?
In the United States, Section 230 of the Communications Decency Act (CDA) of 1996 provides website platforms (“operators of interactive computer services”) with immunity from liability for third-party content. However, it still requires providers to remove material that is illegal at the federal level, such as copyright infringement and material violating sex trafficking laws. Which additional types of unsafe content should be banned by law, with liability imposed on platforms, remains a major issue of public debate. Nudity, for instance, can appear in a fine-arts painting or in sexually suggestive content. Obviously, the context makes a difference.
These are very complex questions that directly relate to questions of freedom of speech. As law professor Jeffrey Rosen said many years ago of Facebook, these platforms have “more power in determining who can speak and who can be heard around the globe than any Supreme Court justice, any king or any president.” Evelyn Douek has written about why we need platforms to be more responsible gatekeepers.
With time, and due to increasing pressure from users, legislators and advertisers, big platforms hosting user-generated content (UGC) started hiring more and more content moderation teams and clearly setting the boundaries of the content they allow in. Or so they thought.
New questions, context issues, fraud and users who could outsmart the platforms popped up every day and evolved alongside the platforms' guidelines. Instagram bans nudity? Models post pictures in extremely tiny swimsuits. Moderation is a profoundly human decision-making process which heavily shapes how we experience the world. What is the right decision to make, how should one decide, who should define it and who should enforce it? The internet needs checks and balances to keep users safe and avoid high-profile catastrophes for platforms and advertisers.
Now is the Time: Malicious Content is Part of a Much Bigger Problem
Offensive and malicious content have been a problem for the internet for a long time. So why did we see a market opportunity in Trust and Safety specifically now?
Because the problem is much bigger than it used to be. It is not only an issue of videos of violent acts online. We are talking about disinformation campaigns, advanced types of fraud and identity theft, misuse of gaming and e-commerce platforms, and more. New ways to create harm keep popping up almost daily, and Trust and Safety issues are gaining more focus than ever from users, governments, activists and platforms, which are increasingly interested in finding solutions that can scale quickly and work with minimal delay, before the damage is done. In addition, thanks to progress in AI, fraud detection and behavior analysis, analyzing and taking apart harmful content works far better than it did a decade ago. As online criminals evolve in parallel with the industry built to catch them, the need for solutions is here to stay. As with real-world crime, we always need to be proactive, think outside the box and stay one step ahead of the wrongdoers.
There are so many examples of abusive online activities and online scandals these days that it is almost impossible to count them:
Disinformation:
- Reports on election meddling in Latin America, Iranian influence operations aimed at U.S. and U.K. audiences, and an increasing number of disinformation campaigns — so much so that even the U.S. Senate held a hearing on the topic
- An issue that has become one of the biggest of our time: fake news, the boundaries of freedom of speech, and the repackaging of truthful content into outrage-driven, clickbait articles that confirm biases and deepen polarization.
Violence:
Child Safety:
- The New York Times exposed that the Internet is overrun with images of child sexual abuse and OneZero reported on how rampant child exploitation is online
Hate Speech:
- Posts from Myanmar's military were promoted by Facebook's algorithm, according to a report by the human rights group Global Witness; U.N. human rights experts had already said that the company played a role in spreading hate speech
Fraud:
- Multimillion-dollar ad fraud schemes based on tracking users and abusing permissions through apps
As happened in the past with intellectual property infringement and privacy, the internet is no longer willing to tolerate Trust and Safety issues, be they ‘low risk’ abuses like spam or toxic language, ‘severe risk’ abuses like cyberbullying, doxxing and harassment, or ‘extreme risk’ abuses like selling child pornography online, spreading misinformation, and recruiting new members to terrorist groups.
It is time to act, and to demand responsibility.
Current Solutions Leave Tech Companies Frustrated and Exposed
The solutions already available for some Trust and Safety issues are far from perfect.
If we look at the available solutions for unsafe content moderation specifically, they can be classified into three main groups, each of which presents tremendous challenges for the industry: human content moderators, community moderation efforts, and AI solutions.
Human content moderators have become the front line of defense, rather than a backstop that handles whatever gets through proactive moderation. They are hired in bulk by big tech companies, which are often subject to heightened scrutiny over their moderation practices and employment terms. “Meet the people who scar themselves to clean up our social media networks”, says an opinion column by Sarah T. Roberts at Macleans. “They are the judges, juries and executioners of our social media lives — yet they work hard, low-paying jobs across the planet”. As content moderation relies heavily on people in developing countries, some employees are not paid enough and report stress and traumatic working conditions caused by viewing gory content all day, which in some cases may even lead to PTSD. And content moderators are not alone: even fact-checkers employed at big tech companies can be subjected to harassment by political opponents.

Content moderation decisions cause real-world harm not only to workers but also to users: censorship, attention given only to reported content, users restricted from the platforms for a certain amount of time, or businesses barred from selling goods, from women's health startups to books whose titles contain ‘problematic’ words. Over-moderation can lead to serious violations of freedom of speech online. In addition, it is extremely hard to become a subject-matter expert on so many content types. Moderators need to understand the rules in different geographical locations, the company's ever-changing policies, their users' rights and preferences, and the subtleties of moderation, and of course they must be experts on the types of malicious content (violence, criminal behavior, dangerous organizations, terrorism, hate speech, regulated goods, drugs, firearms, self-injury and suicide, nudity, sexual exploitation of children, violent and graphic content, fake accounts, spam, etc.) and actors, almost like ‘content investigators’.
Another way to involve humans in the decision-making process is through community moderation efforts, such as mechanisms that let users themselves flag and report unsafe content. Reddit, for instance, gave the power to its users: every subreddit, once it hit a certain size, developed its own rulebook about what is and isn't allowed in the discussion, and the users responsible for enforcing those rules are the community's moderators, or mods, with varying capabilities. In 2012, Reddit's CEO at the time said the site stood for free speech and would not “censor things simply because they were distasteful.” Since then the company has created a list of content rules, banning content such as sexual material involving minors, and recently removed a subreddit that objectified female Olympians after VICE revealed its disturbing content. This is just one example showing that content moderation that relies on community policing can be inconsistent and confusing. Other issues arise as well: a person with a lot of followers is more likely to be reported. When a public figure's profile is taken down, it may also influence or pressure other companies to follow suit. Some actors may have a greater ability to influence companies' decisions than others. Users who wish to appeal a moderation decision sometimes have no feasible way of doing so, or of understanding why exactly their profiles or content were removed. Even when companies publish transparency reports, they are not always extensive enough. In addition, many online spaces still have no admins, and even when they do, the admins themselves may be biased.
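To make the mechanics concrete, here is a minimal sketch of a report-threshold queue, the kind of flagging mechanism described above. It is a hypothetical toy: the class name, the three-report threshold and the API are illustrative assumptions, not how Reddit or any other platform actually implements community reporting.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical escalation threshold: how many distinct users must report an
# item before it is queued for human moderator review.
REVIEW_THRESHOLD = 3

@dataclass
class CommunityReportQueue:
    """Toy model of community flagging: content is escalated to human
    moderators once enough distinct users have reported it."""
    reporters_by_item: dict = field(default_factory=lambda: defaultdict(set))
    review_queue: list = field(default_factory=list)

    def report(self, content_id: str, reporter_id: str) -> None:
        self.reporters_by_item[content_id].add(reporter_id)
        if (len(self.reporters_by_item[content_id]) >= REVIEW_THRESHOLD
                and content_id not in self.review_queue):
            # Human mods, with their own rulebooks and biases, take it from here.
            self.review_queue.append(content_id)

queue = CommunityReportQueue()
for user in ("alice", "bob", "carol"):
    queue.report("post_42", user)
print(queue.review_queue)  # ['post_42'] -- escalated after three distinct reports
```

Even this toy version encodes the bias described above: the more visible an account is, the faster it accumulates reports, so popularity alone can be enough to trigger escalation, regardless of whether the content actually breaks the rules.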
The power of Artificial Intelligence is also playing an important role in solving some content moderation and scalability issues, at least for companies that can afford it. Automation should have helped, but with the increasing use of AI, many problems also arose from relying on a machine instead of the human brain and its ability to put things in context. There are no universal rules on speech, and public views change from region to region and from time to time, which AI may struggle with. In the wake of the coronavirus pandemic, reliance on automated tools increased. However, automated systems are limited in their ability to consistently identify content correctly, which poses a risk to freedom of expression online and results in many errors. Automated moderation tools struggle with some types of content, as well as with putting content in the right context. Environments constantly change, people who use technology to harm catch up and outsmart ML models, and ethical issues arise. “One of the pieces of criticism we get that I think is fair is we're much better able to enforce our nudity policies, for example, than we are hate speech,” said Mark Zuckerberg in a call with analysts in 2018. “The reason for that is it's much easier to make an AI system that can detect a nipple than it is to determine what is linguistically hate speech, so this is something I think we will make progress on, and we'll get better at over time. These are not unsolvable problems.” Not unsolvable, perhaps, but even with the progress made in understanding context, it is neither fast nor good enough. This means, for instance, that users complain that more attention is given to policing their bodies than to speech that may be harmful.
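To see why context is so hard for automated systems, consider a deliberately naive keyword filter. This is a hypothetical toy, far simpler than the trained classifiers platforms actually deploy, but it fails in the same fundamental way: it matches surface features without understanding the surrounding context.

```python
import re

# A deliberately naive blocklist. Real platforms use trained classifiers,
# but the core failure mode is similar: matching surface features of text
# (or images) without understanding the surrounding context.
BLOCKLIST = {"kill", "nude"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted word, ignoring context."""
    tokens = set(re.findall(r"[a-z']+", post.lower()))
    return bool(tokens & BLOCKLIST)

posts = [
    "I will kill you if you show up here again",                  # genuine threat: flagged
    "This workout will kill you, in the best possible way",       # harmless idiom: false positive
    "Her thesis studies the nude figure in Renaissance painting",  # fine art: false positive
    "People like you don't belong in this country",                # harmful, but no keyword: missed
]
for post in posts:
    print(f"flagged={naive_flag(post)}  {post}")
```

The fine-art example is exactly the nudity-in-a-painting question raised earlier, and the last line is the gap Zuckerberg described: detecting a keyword (or a nipple) is easy, while determining what is linguistically hateful is not.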
And we are not done yet: decisions that companies make to address their users' privacy concerns, such as encrypting and anonymizing their platforms, can also create content moderation issues. The platforms may become hiding places for perpetrators and hamper law enforcement's ability to fight crime.
The result? Tech platforms are helpless.
They spend a lot of money solving Trust and Safety problems, setting rules and policies, enforcing them and being transparent, yet every new successful social media platform immediately raises concerns among users about how safe it really is.
Every time, new questions arise about boundaries, checks and balances: Is a picture of a child in a bath with a parent considered nudity? Is cutting someone's head off OK if it happens in a cartoon?
Concerns about content and context keep growing, and countermeasures are not keeping pace.
Are the tech platforms that hold so much personal data well protected, and have they performed adequate penetration testing, or are they filled with deepfakes and bots?
Will brands continue to advertise on the platforms when they are so worried?
Will the right toolkit fall from the sky, or from a think tank (like this ISD guide to disinformation detection)?
Law and Order: State, Self, and Community Regulations
So, we understand that the Trust & Safety problem has developed with time and has grown to enormous proportions. We realize that it is high time to offer solutions. But who exactly should be in charge of the internet?
The immediate suspects are regulators working in cooperation, as the past has shown this can work to a great extent (the EU's GDPR, for instance). After all, regulators are in many ways in charge of our safety in the real world, so why shouldn't we demand that they be in charge of our virtual safety as well?
At the moment, some countries are already taking action on that front:
The European Parliament and the European Council will need to discuss and approve the Digital Services Act later this year, and in preparation an extensive study has already been published on the content moderation systems and practices of key online platforms, as well as on the EU regulatory framework, with recommendations for improvement.
Germany has created one of the most progressive pieces of legislation in the evolution of internet law — the Network Enforcement Act, known as NetzDG — which states that illicit content must be removed within 24 hours of notice from either a user or the government, under threat of fines of up to €50 million. This is considered an unprecedented attempt to fight hate speech while recognizing the importance of civil liberties such as free speech.
Ofcom, the UK's regulator for communications services, commissioned Cambridge Consultants to produce a report titled ‘Use of AI in Online Content Moderation’ as a contribution to the evidence base on people's use of and attitudes towards online services. “In recent years a wide-ranging, global debate has emerged around the risks faced by internet users with a specific focus on protecting users from harmful content”, the report says in its introduction. Ofcom itself is expected to be granted the authority to enforce the UK's forthcoming Online Safety Bill, which puts increased liability on platforms for terrorism and child sexual abuse material, with failure to comply resulting in fines of up to 10% of global turnover. The House of Lords also published a report in July named ‘Free for All? Freedom of Expression in the Digital Age’.
Some countries have already passed laws. In India, the Information Technology Rules 2021 require social media companies to remove materials considered prohibited within 36 hours of notification, and give them 72 hours to provide the personal details of the person who posted the infringing content. In Australia, the 2019 Criminal Code Amendment, which came into being after the live-streamed Christchurch massacre, requires online intermediary services to report live-streamed content such as terrorist acts, murder, torture, rape or kidnapping thought to be taking place on Australian soil to the Australian Federal Police.
***
The internet is a wild west, and regulators are not operating in a vacuum. They need to act quickly alongside the evolution of the internet and gain expertise in many fields, subjects, methods and platforms, and they sometimes lack the resources and access required for effective, timely action. Thankfully, regulators are not alone in this battle.
Public and industry organizations also push for better internet safety, including the Santa Clara Principles on Transparency and Accountability in Content Moderation and the Trust & Safety Professional Association (TSPA), which is advancing the Trust and Safety profession through a shared community of practice.
Tech platforms themselves have already started to act. Major platforms publish transparency reports, including Facebook, YouTube, TikTok and Reddit, as well as Yelp and TripAdvisor. Companies also allocate significant resources, in both talent and budget, to solving Trust and Safety issues: Facebook has invested heavily in Policy and Integrity teams, Instagram has changed the platform to add restrictions that protect its teen users, Apple plans to scan US iPhones for child abuse imagery, and Discord already has its own moderation academy.
A Very Big Market, A Very Big Financial Opportunity: Budgets, M&As, Funding Rounds and a Fight Over Trust and Safety Talent
The numbers behind the newly born Trust and Safety industry are incredible: tech companies spend an increasing amount of money to keep their platforms safe and user-friendly, and significant M&A deals in the field are beginning to surface. According to MarketWatch, the global content moderation solutions market alone is projected to reach USD 13,630 million (roughly $13.6 billion) by 2027.
TikTok's Director of Government Relations and Public Policy, Theo Bertram, told British politicians in September 2020 that TikTok had over 10,000 people working on trust and safety worldwide at the time.
Only a few weeks ago, a Twitter spokesperson again acknowledged that the company is working on making the platform “a safer place for online engagement… improving the speed and scale of our rule enforcement is a top priority for us”.
Facebook's moderation workforce has grown to more than 15,000 people who decide what can and cannot appear on the platform, after the company released a 2017 report acknowledging that “malicious actors” from around the world had used the platform to meddle in the American presidential election. It has also appointed its own Oversight Board, with experts from all over the world.
Discord bought Sentropy, a company that makes AI-powered software to detect and remove online harassment and hate. Discord also polices itself: 15% of its employees are now part of its Trust and Safety team, a unit that did not exist at all in 2017.
Microsoft acquired RiskIQ to better understand threats to vulnerable internet-facing assets.
Funding rounds are also gaining momentum; Hive raising $85M for AI-based APIs that help moderate content is only one example.
Nonetheless, companies are also fighting for talent: TikTok, for instance, is luring Facebook moderators to fill its new Trust and Safety hubs.
But as problems continue to pile up at a dizzying pace, stand-alone tools do not provide a satisfactory solution. As with policing in real life: we will always need the police, and we will always have criminals. And that is where we need experts to save us.
Clean up Your Platform with ActiveFence: A Different Approach, a Big Promise
Grove Ventures' latest investment, ActiveFence, presents a different, proactive approach to Trust and Safety and to detecting malicious activities online at scale. We chose to partner with this outstanding team because we identified the importance of a holistic, cross-platform approach to Trust and Safety, as well as the military-grade intelligence capabilities required to tackle these pressing issues. ActiveFence has the potential to solve many of these problems at a critical point in time, when everyone realizes that it is time to ‘fix the internet’ and take responsibility for the content we see online.
ActiveFence has spent the past few years creating and leveraging the world's largest database of malicious content to proactively detect even the most sophisticated bad actors across a broad spectrum of threat categories. Finally out of stealth with more than $100M in total funding, customers among some of the world's largest tech companies, and a broad range of capabilities and experience drawn from Israel's best intelligence units and tech companies in areas like data mining, analysis and handling big data at scale, the ActiveFence team has the expertise to understand the new category it spearheads and to look at things in the right context. We are positive that their intelligence-led, customizable, and out-of-the-box AI-powered solutions will do good in the world, both online and offline.