Security for the Virtual World

Social media is one of the most revolutionary developments of the internet age. It gives individuals a virtual existence across multiple channels built on networking, content sharing and sociability. Every social media platform collects data, and that data requires security protocols and regulation to control who can access it. The bodies that regulate social media therefore play an essential role in how it operates. Monitoring social media remains controversial, however: it may be a necessity, but it is also widely regarded as an invasion.

In recent years, high-profile scandals involving electoral interference, fake news, misinformation, violations of data privacy, and suppression by anti-democratic regimes have cast a cloud over social media platforms. It has become essential to develop procedures that curb this behavior, and the most popular platforms have begun to introduce rules designed to prevent such practices.

Facebook, which owns Instagram, has more than 35,000 people around the world working on safety and security, and it publishes content removal statistics. Between July and September 2019 it took action on 30.3 million pieces of content, 98.4% of which it found before any user flagged it. If illegal content, such as “revenge pornography” or extremist material, is posted, the individual’s account is disabled and a monitoring report on the user is sent to the head of the regulations department. Facebook has, however, been seen to shield certain political and social practices in ways that may compromise its own regulations. The 2016 Trump campaign, for instance, became a major controversy for Facebook when user data was transferred to the data firm Cambridge Analytica and used to sway voters.

Instagram has adopted most of the regulations Facebook introduced, most importantly those limiting individual online behavior. Its banning policy starts with a warning; if the violation is repeated, the account is banned. Because Instagram is centred on content sharing and creation, its highly sensitive approach to cultural appropriation and triggering content helps it distinguish what must be removed from what can merely be restricted. Much of this moderation is region-specific; much music and other content, for instance, is unavailable in certain countries.
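The escalation described above, a warning first and a ban on repeated violation, can be pictured as a small state machine. The sketch below is purely illustrative and assumes nothing about Instagram’s real systems; the Account class, the thresholds and the function name are hypothetical.

# Minimal sketch of a warn-then-ban escalation policy.
# Purely illustrative; all names and thresholds are hypothetical,
# not Instagram's actual implementation.

from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    strikes: int = 0
    banned: bool = False

BAN_THRESHOLD = 2  # first violation -> warning, second -> ban

def record_violation(account: Account) -> str:
    """Apply one policy violation and return the action taken."""
    if account.banned:
        return "already banned"
    account.strikes += 1
    if account.strikes >= BAN_THRESHOLD:
        account.banned = True
        return "account banned"
    return "warning issued"

# Example usage: the first violation only warns, the second bans.
acct = Account(user_id="example_user")
print(record_violation(acct))  # warning issued
print(record_violation(acct))  # account banned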

Similarly, coverage of cultural controversies and protests against racial stereotypes is limited to certain regions, which can either strengthen or undermine the platform’s security regime. The Black Lives Matter protests were among the largest campaigns to create turmoil on Instagram; many influencers who did not use their stardom to address the ongoing issue were targeted on the platform.

Twitter’s head of public policy strategy, Nick Pickles, said: “Twitter is committed to building a safer internet and improving the health of the public conversation. We support a forward-looking approach to regulation that protects the Open Internet, freedom of expression and fair competition in the internet sector. Openness and transparency is central to Twitter’s approach, as embodied by our public API, our information operations archive, our commitment to user choice, our decision to ban political advertising and label content to provide more context and information, and our disclosures in the Twitter Transparency Report.” Twitter, however, is another platform that attracts plenty of controversy. Political figures and social influencers have often engaged in what began as playful banter and ended in chaotic outcomes, from court hearings to misleading documentation.

YouTube releases a transparency report, which gives data on its removal of inappropriate content. The video-sharing site, owned by Google, said that 8.8m videos were taken down between July and September 2019; 93% of them were removed automatically by machines, and two-thirds of those clips had not received a single view. It also removed 3.3 million channels and 517 million comments.

Globally, YouTube employs 10,000 people in monitoring and removing content, as well as in policy development. The platform also faced a large boycott from Muslim countries when multiple videos disrespecting Islamic values and beliefs were posted. The continuing influx of such content caused havoc for the platform and damaged YouTube’s reputation within the Muslim community.

It is therefore imperative for social media platforms to maintain rules and regulations that are operative and evolving, aligned with issues around the globe. Several measures have been suggested to reduce these problems: “circuit breakers” that hold newly viral content for fact-checking before it spreads further; requiring social networks and influencers to disclose why content has been recommended, and limiting the use of micro-targeted advertising; making it illegal to exclude people from content on the basis of race or religion, such as hiding a spare-room advert from people of color; and banning so-called dark patterns, user interfaces designed to confuse or frustrate the user, such as making it hard to delete your account.
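As an illustration of the “circuit breaker” idea only, the sketch below pauses a post once its share rate crosses a threshold, until a fact-check clears it. The threshold, the Post fields and the function name are assumptions made for illustration, not any platform’s actual mechanism.

# Illustrative sketch of a viral-content "circuit breaker".
# All names, thresholds and the fact-check step are hypothetical;
# this is not any platform's actual mechanism.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    shares_last_hour: int = 0
    fact_checked: bool = False
    held: bool = False

VIRAL_THRESHOLD = 10_000  # shares per hour that trips the breaker

def should_distribute(post: Post) -> bool:
    """Decide whether the post keeps circulating or is held for review."""
    if post.shares_last_hour >= VIRAL_THRESHOLD and not post.fact_checked:
        post.held = True   # trip the breaker: pause further spread
        return False
    return True            # below threshold or already fact-checked

# Example: a post going viral is held until a reviewer clears it.
post = Post(post_id="example", shares_last_hour=25_000)
print(should_distribute(post))  # False -> queued for fact-checking
post.fact_checked = True
post.held = False
print(should_distribute(post))  # True -> distribution resumes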

In conclusion, the growth of content and the growth of its consumers go hand in hand. Every platform that brings the two together must actively monitor damaging behavior and address problems at the grassroots level, so that it offers a safe environment for everyone, free of indecent or inappropriate content and conduct. As society grows, these factors become harder to manage and control, so access to these platforms should be limited to communities that are accepting and understanding of one another.
