The European Union is close to reaching an agreement on a set of new rules aimed at protecting internet users by requiring large tech companies such as Google and Facebook to increase their efforts to combat the spread of illegal content, hate speech, and misinformation.
Officials from the European Union were negotiating the final details of the legislation, dubbed the Digital Services Act, on Friday. It’s part of a sweeping overhaul of the EU’s digital rules, emphasizing the EU’s role as a global leader in the fight to limit the power of online platforms and social media companies.
While the rules must still be approved by the European Parliament and the European Council, which represent the 27 member countries, the EU is far ahead of the US and other countries in terms of drafting regulations to force tech companies to protect people from harmful content that spreads online.
Negotiators from the EU’s executive Commission, the European Parliament, and member states, with France representing the latter as current holder of the rotating EU presidency, were working to reach an agreement by the end of Friday, ahead of Sunday’s French elections.
The new rules would hold tech companies more accountable for content on their platforms, with the goal of protecting internet users and their “fundamental rights online.” Social media platforms like Facebook and Twitter would have to beef up mechanisms to flag and remove illegal content like hate speech, while online marketplaces like Amazon would have to do the same for questionable products like counterfeit sneakers or dangerous toys.
These systems would be standardized to work in the same way across all online platforms.
According to the EU’s single market commissioner, Thierry Breton, “any national authority will be able to request that illegal content be removed, regardless of where the platform is established in Europe.”
Companies that break the rules could face fines of up to 6% of their annual global revenue, which could amount to billions of dollars for tech giants. Repeat offenders could be banned from operating in the EU market altogether.
The Digital Services Act also includes provisions to better protect children, such as a prohibition on advertising directed at children under the age of 13. It would also be illegal to target online ads at users based on their gender, ethnicity, or sexual orientation.
There would also be a ban on “dark patterns,” which are deceptive techniques used to persuade users to do things they didn’t intend to do.
Tech companies would be required to conduct regular risk assessments on illegal content, disinformation, and other potentially harmful information, and then report on whether they are doing enough to address the issue. They’ll have to be more open about their content moderation efforts and provide information to regulators and independent researchers. This could include requiring YouTube to hand over information about whether its recommendation algorithm has been directing users to more Russian propaganda than usual.
The European Commission is expected to hire more than 200 new employees to enforce the new rules. To pay for it, tech companies will be assessed a “supervisory fee,” which, depending on the negotiations, could be as high as 0.1% of their annual global net income.
Last month, the EU reached a similar political agreement on the Digital Markets Act, a separate piece of legislation aimed at limiting the power of tech behemoths and ensuring that smaller competitors are treated fairly.
Meanwhile, Britain has drafted its own online safety legislation that includes prison sentences for senior executives at tech companies who fail to comply.