Between the Lines

On freedom of speech in an age free of accountability.

February 15, 2018

Keeping Our Internet Ecosystem Green

Social media is a way of life in this day and age. Most teens hold accounts on multiple platforms, and sites like Twitter and YouTube are frequented by millions. Internet stars have emerged on these platforms, gathering followers, fame, fortune, and influence over young people’s lives. It’s time the content they post is evaluated and properly regulated.

Recently, many platforms have faced problems with censorship, particularly YouTube. The platform’s biggest star, PewDiePie, was attacked for posting anti-Semitic jokes and Nazi imagery. Later came the discovery of channels dedicated to videos featuring exploited children in revealing clothing, which had managed to dodge YouTube’s child safety guidelines. Most recently, vlogger Logan Paul faced backlash after posting a video in which he filmed the body of a suicide victim in Japan.

That this content can be uploaded and seen by millions of viewers is unacceptable, especially since the majority of subscribers to these channels are young people. (Logan Paul’s 15 million subscribers are mainly white females from age 11 onward.) Our society is powered by the Internet. It is our primary source of news, entertainment, and communication. But, as the old saying goes, “you are what you eat.” By this logic, the content allowed on the Internet should not be inappropriate, hateful, or insensitive. Society is conditioned by what is promoted online. An onslaught of harmful media will affect our younger generations, desensitize us to hate, and promote a world of indifference. In other forms of media, regulations are already in place to prevent this. The Federal Communications Commission prohibits obscene and indecent content on broadcast television. Anti-Semitic and racist content could never be aired on television, so there is no reason it should be posted online.

Freedom of speech is one of the most critical parts of our Constitution. But it’s common sense that freedom of speech means the freedom to express oneself, not the freedom to harm, be a bigot, or instigate hatred with volatile opinions. In society, there are general rules: you can’t walk into a room and make a Nazi joke without knowing it’s inappropriate, and it’s widely agreed that explicit racism isn’t something one can exhibit without facing severe consequences. Why should these rules not apply online as well?

The veil of anonymity provided by the computer screen allows individuals to express inflammatory opinions without consequences and, in the process, find other people online who share those opinions to spawn a digital clan of savagery. Freedom of speech is not an invitation to be a public menace, and it has become too easy for barbaric behavior to slide online.

The problem is that YouTube already has protections and regulations in place, such as Restricted Mode, flagging, and “strikes” against users who post harmful content. So why did Logan Paul’s video, which was reportedly reviewed before it was posted, get 6 million views before it was taken down? The answer is the critical flaw in artificial intelligence: an algorithm can detect offensive words or images, but it lacks a moral code or any ability to make ethical judgments. YouTube hosts 1 billion active users each month, and with 300 hours of video uploaded to the site every minute, artificial intelligence does seem to be the only feasible method of regulation. But heavier guidelines must still be set in place. Perhaps an algorithm could flag certain words, images, or tags, and the filtered videos could then be reviewed by real people before being posted. Stricter punishments for users who violate the terms of service, such as suspension, could help as well.

In the end, the Internet is the primary way we connect, learn, and grow. Just like our natural environment, it should be a healthy, civilized place where rules and freedom do not contradict each other, but maintain order. And just as the Internet has dark corners and harmful content, real life isn’t perfect. But we can, and always should, try to do better.


The New Big Brother

In the aftermath of Logan Paul’s buffoonish video of a dead body in Japan’s suicide forest, YouTube and other social media platforms have been forced to reckon with the immoral and inappropriate materials that they unknowingly distribute. This reckoning comes after months of pressure and scrutiny over the content of these platforms, from Twitter’s banning of Milo Yiannopoulos to the outrage over monetized channels making money off of child exploitation. Logan Paul’s video may not have been social media’s first controversy, but it was the one that finally pushed YouTube to change the way it monetizes and regulates content.

This is dangerous.

Historically, platforms have failed at restricting content. Although censorship may seem like an easy solution, it is not an effective one. Previous attempts by YouTube to regulate content through keywords led the platform to demonetize thousands of LGBT-related videos. While trying to restrict violent and sexual content, Facebook removed photos of breast-feeding mothers and the historically important “Napalm Girl” photo from the Vietnam War. All of these decisions were met, rightfully, with public uproar. Every attempt at censorship has led to little improvement in content at the expense of free speech. YouTube’s most recent regulation, however, is its most harmful. In an effort to police monetized channels, the platform demonetized all small channels. These smaller channels are now paying for Paul’s actions, literally. This decision will limit the ability of small channels to gain visibility for years to come.

Censorship always hurts the least powerful. The small YouTubers whose channels will suffer are only the first casualties, and probably the least important. The restriction of speech on platforms usually comes at the behest of governments using platforms as a tool to control the flow of information. Nations like China, Egypt, and Israel restrict social media as a way to control their populations. Egypt, for example, uses such restrictions to shut down important conversations about governmental torture and brutality. At the Indian government’s request, Twitter banned users sympathetic to Kashmiri independence. Facebook blocked the co-author of the Panama Papers for criticizing the Maltese government. In Israel, the government pressures platforms to uphold Israeli law by censoring Palestinians. The censored information ranges from photos and poetry to, in the case of Tamara Abu Laban, posting the words “forgive me” in Arabic as her status. For that crime, Abu Laban was arrested and sentenced to a fine of 11,500 shekels and five days of house arrest.

Beyond governments, platforms often put the tools of censorship in the hands of users, who frequently target speech they dislike. Twitter in particular has a problem with users who try to suppress speech by mass-reporting accounts and content. Although this tactic is prevalent on both sides of the aisle, journalists and members of the #MeToo movement have taken the largest hit. The Ukrainian news site Liga was blocked from Facebook following false reports of nude content on its page. Rose McGowan was suspended from Twitter in the early days of her campaign against Harvey Weinstein. Even more damaging for Twitter is the perception that it applies its rules selectively, with white supremacist accounts remaining up and harassment reports often going unanswered. It took Milo Yiannopoulos’ targeted racial abuse of Leslie Jones for his account to be permanently suspended.

But while some cases are cut and dried, others are not. People perceive things differently; what is simply political speech to some may be offensive to others. The distinction isn’t always clear, as in the case of Alex Zaragova, whose article on sexual harassment was removed from Facebook for its opening line: “Dear dudes, you’re all trash.” Was it humor, or just an eye-catching opening? An attempt to draw attention to the complicity of many men in the sexual harassment of women? Or was it blatant sexism? Where do we draw the line? How do we distinguish the political from the hateful, the propaganda from the opinion?

These are questions that have no simple answer. That is why increased censorship is such an ineffective solution to disturbing content. Any attempt to restrict speech only complicates the problem. Instead, platforms should work to enforce the rules that they already have in place. Twitter must start taking harassment reports seriously. Facebook must double down on its commitment to warn users of fraudulent content. Platforms must practice transparency and due process. Users need to know that the rules are applied consistently and fairly. Likewise, platforms need to take responsibility for the decisions, both good and bad, that they make. Only then can the dream of a free and safe internet be realized for all people.


The Little Hawk • Copyright 2024
