
How Facebook became the opium of the masses

By František Vrabel / Prague

In the fight against disinformation, it can be difficult to identify an enemy. Journalists, politicians, governments, and even grandparents have been accused of allowing the spread of falsehoods online.
None of these groups is entirely innocent, but the real culprit is more banal. As Facebook whistleblower Frances Haugen testified late last year, it is social media algorithms that enable disinformation to spread.
Since its launch in 2004, Facebook has grown from a student social-networking site into a surveillance monster that undermines social cohesion and democracy around the world. Facebook collects vast amounts of user data, including intimate details such as a person’s weight and pregnancy status, and uses it to map each user’s “social DNA.” The company then sells this information to anyone who wants to “micro-target” its 2.9 billion users, from shampoo makers to Russian and Chinese intelligence agencies. In this way, Facebook enables third parties to manipulate minds and trade in “human futures”: predictive models of the choices individuals are likely to make.
Around the world, Facebook has been used to sow distrust in democratic institutions. Its algorithms have fostered real-world violence, from genocide in Myanmar to terrorist recruitment in South America, West Africa, and the Middle East. Lies about US electoral fraud promoted by former President Donald Trump flooded Facebook ahead of the January 6 riots. Meanwhile, in Europe, Facebook has enabled Belarusian strongman Alexander Lukashenko’s effort to use migrants as a weapon against the European Union.
In the Czech Republic, disinformation originating in Russia and shared on the platform is flooding Czech cyberspace, thanks to Facebook’s malign algorithms. According to an analysis conducted by my company, the average Czech citizen is exposed to 25 times more disinformation about Covid-19 vaccines than the average American. The situation is so dire, and the government’s response so inadequate, that Czechs must rely on civil society (including volunteers known as the Czech Elves) to monitor and counter the onslaught.
So far, efforts to mitigate Facebook’s threat to democracy have failed miserably. In the Czech Republic, Facebook has partnered with Agence France-Presse (AFP) to identify harmful content. But with only one part-time employee reviewing a mere ten suspicious posts per month, these efforts are a drop in the sea of disinformation. The Facebook Files published by The Wall Street Journal confirm that Facebook takes action against only 3-5% of hate speech.
Facebook’s offer to let users opt out of customized and political ads is a token gesture. Some organizations, such as Ranking Digital Rights, are calling on the platform to disable ad targeting by default. Even that is not enough. Micro-targeting, which underpins Facebook’s business model, relies on artificial intelligence to capture users’ attention, maximize engagement, and disarm critical thinking.
In many respects, micro-targeting is the digital version of the opioid crisis. But whereas Congress is actively working to protect people from opioids through legislation designed to expand access to treatment, education, and alternatives, it has done little about the world’s addiction to fake news and lies. To end it, lawmakers must recognize the disinformation crisis for what it is and take similar action, starting with proper regulation of micro-targeting.
The problem is that no one outside of Facebook knows how the company’s complex algorithms work, and decoding them could take months, if not years. Regulators therefore have no choice but to rely on Facebook insiders to guide them through the machinery. To encourage that cooperation, Congress must offer full civil and criminal immunity as well as financial compensation.
Regulating social media algorithms may seem complicated, but it pales in comparison with the larger digital hazards on the horizon. Deepfakes (AI-generated videos and images manipulated at scale to sway opinion) are rarely discussed in Congress. While lawmakers grapple with the threat posed by conventional content, deepfakes pose even greater challenges to personal privacy, democracy, and national security.
Meanwhile, Facebook is becoming more dangerous. According to a recent investigation by the MIT Technology Review, Facebook is funding disinformation by paying millions of dollars through its advertising platform to bankroll clickbait actors. And CEO Mark Zuckerberg’s plan to build a metaverse, a “fusion of physical, augmented, and virtual reality,” should scare regulators everywhere. Imagine the damage unregulated AI algorithms could cause if allowed to create immersive new realities for billions of people.
In a statement after a recent hearing in Washington, DC, Zuckerberg reiterated his previous offer: regulate us. “I don’t think private companies should make all the decisions themselves,” he wrote on Facebook. “We are committed to doing the best we can, but at some level, the right body to assess trade-offs between social equities is our democratically elected Congress.”
Zuckerberg is right: Congress has a responsibility to act. But so does Facebook. It can show Congress what social harms its platform continues to create, and how it creates them. Until Facebook’s own experts help regulators scrutinize its algorithms, the fight against disinformation will be unwinnable, and democracy around the world will remain at the mercy of a malign and rogue industry. — Project Syndicate


* František Vrabel is the CEO and founder of Semantic Visions, a Prague-based analytics company that collects and analyzes 90% of the world’s online news content.



