A proposal to enable citizens to exert governance over those who abuse social media platforms.
For the past few decades we have all been part of an enormous, worldwide sociology experiment. And the results are not good. Outside the technology sector, many products are subject to regulatory control, primarily for public safety reasons: think of pharmaceuticals, automotive, medical devices, food products and many more. Similarly, in the high-tech sector many standards have been developed through collaboration between leading tech firms, who have supported the creation of standards organizations such as the OMG, DMTF, W3C, The Open Group and many more. Without these, our modern tech-enabled world would simply not work! In contrast, the social media sector has been like the wild west. While there has been considerable attention to data security and privacy, there has been almost no consideration of the effects of the universal availability, and abuse, of social media on our society.
It was thirty-five years ago that I first observed the keyboard-warrior syndrome in the business environment. It was quite a shock when seemingly reliable, intelligent, often senior individuals sent vitriolic or abusive mail messages over the early Internet that were completely out of character. Often the message was an impulsive, angry response to a previous one, and frequently it triggered further angry exchanges and face-to-face arguments. In business environments today keyboard warriors are quite rare, but on social media in general there is much less self-control. Further, the perceived anonymity of social media has encouraged widespread distancing from the truth. Many people feel unencumbered by societal norms of truthfulness, and over time what is said on social media has merged with many people's sense of reality.
Over the past few years, we have observed high-profile public figures who display utter contempt for truth. As discussed in my last post, no less a figure than the US President is now a case study for this convention-breaking behaviour. And while Donald Trump didn’t invent this behavioural pattern, he has certainly demonstrated a talent not only for using it but for encouraging huge numbers of people to believe the unbelievable, and many, many others to follow suit. As I said in my last post, we must now recognize that technology has facilitated this problem, and it’s time to fix it.
Inevitably, the leading platforms come in for considerable criticism from all sides, given the divisive state of US politics at present. Facebook and Twitter have implemented some governance themselves, and while most of their efforts have focused on those publishing abusive images, Twitter, to their credit, have flagged numerous Trump tweets that are barefaced lies. But this is woefully inadequate.
As discussed, the tech companies have been leaders in developing standards. As new technologies mature there is often a pressing need for common approaches. Frequently standards bodies are formed by the tech companies themselves, and often the standards evolve from existing practices and technologies that have proven effective. In this area of “truth governance”, it would seem that Facebook and Twitter in particular could cooperate to everyone’s advantage.
My proposal is very simple. Every tweet or post should carry an additional button, labelled TRUE/FALSE, similar to LIKE. Behind the TRUE/FALSE button, an artificial intelligence (AI) engine should monitor not whether something is actually true or false (that would be too difficult at this stage) but the individuals who register a truth status, estimating probability on the basis of each individual's prior assessments and of swarm behaviour. The latter analysis would aim to detect deliberate efforts to coordinate or misrepresent genuine assessments. On the basis of these collective assessments, the platforms should implement a common penalty system along the lines of sports penalties: a yellow card for an early warning, a red card for a serious breach. Behind the scenes, a standards body funded by the tech companies should develop the AI truth engine and employ mediators, similar to Wikipedia administrators: trusted users with access to functions not available to other users, for example the ability to delete pages and block posts and users.
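To make the mechanics concrete, the voting-and-penalty scheme described above could be sketched roughly as follows. This is purely an illustrative toy, not anything Facebook or Twitter actually implement: every name, weight and threshold here is a hypothetical assumption, and a real engine would need far more sophisticated reputation and swarm-detection models.

```python
# Hypothetical sketch of the TRUE/FALSE proposal. All names, weights and
# thresholds are illustrative assumptions, not a real platform API.
from dataclasses import dataclass, field

@dataclass
class Voter:
    user_id: str
    reliability: float = 0.5  # prior credibility in [0, 1], learned over time

@dataclass
class Post:
    post_id: str
    votes: dict = field(default_factory=dict)  # user_id -> True/False vote

def truth_score(post: Post, voters: dict) -> float:
    """Reliability-weighted fraction of TRUE votes (0.5 = undecided)."""
    total = sum(voters[u].reliability for u in post.votes)
    if total == 0:
        return 0.5
    true_weight = sum(voters[u].reliability for u, v in post.votes.items() if v)
    return true_weight / total

def update_reliability(post: Post, voters: dict, lr: float = 0.1) -> None:
    """Nudge each voter's reliability toward or away from the consensus,
    so habitual outliers (or coordinated swarms) gradually lose weight."""
    consensus = truth_score(post, voters) >= 0.5
    for u, v in post.votes.items():
        target = 1.0 if v == consensus else 0.0
        voters[u].reliability += lr * (target - voters[u].reliability)

def penalty(score: float) -> str:
    """Map a post's truth score to the proposed card system."""
    if score < 0.2:
        return "red"     # serious breach: collective assessment says false
    if score < 0.4:
        return "yellow"  # early warning
    return "none"
```

For example, a post voted FALSE by two high-reliability users and TRUE by one low-reliability user would earn a low truth score and a red card, and the dissenting voter's reliability would drift downwards, which is the hoped-for defence against coordinated misrepresentation.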
We mustn’t expect all social media platforms to comply with such standards. There are platforms, such as Parler, that invite users to “speak freely and express yourself openly, without fear of being ‘deplatformed’ for your views”. While we might expect certain users to gravitate over time to loosely governed or ungoverned platforms, the advocates of “free speech” and conspiracy theories will continue to undermine the leading, governed platforms. As AI-based governance engines become more sophisticated there will be some migration to ungoverned platforms. We can only hope these will become very niche over time.
Over the past four-plus years we have all become accustomed to lies and fake news from the Trump campaign. Perhaps even Trump himself believed he could manufacture a fantastical conspiracy theory of massive electoral fraud and use it to annul the election. Yet, apart from the most hypnotized followers in his base, the majority of people will now see Trump as liar-in-chief. We might play Trump at his own game and colloquially label the TRUE/FALSE button the “Trump” button, and use the verb “to Trump”. It’s only fitting that we remember him in the right context.
The entire social media environment has become like the wild west. Anyone can say anything, and wild ideas and fantastical theories become instantly circulated and believed. And while the US presidential election looks likely to be resolved, there must be widespread disillusionment with democracy. Perhaps, now that the world has seen to its horror how badly things can go wrong, the time has come when there is some willingness to walk back from the cliff edge.