Facebook has released an extensive report, the Community Standards Enforcement Report, covering its Facebook and Instagram platforms. The report responds to a rise in misinformation, or ‘fake news’, which is creating serious social problems around global issues such as Covid-19.
Covid-19 is a classic example. As the pandemic became breaking news, people were hungry for causes and cures, only to find some rather weird and wacky information. AI can take this information, check it against credible sources and then flag the content for possible human review.
Both platforms allow free thinking and discussion between users. However, with phenomenal growth comes a huge increase in misuse. Facebook has long used technology to analyse how its platforms are used. It has now made that technology more intelligent, cross-analysing the data to reduce:
- False information
- Bullying and harassment
- Dangerous organisations
- Hate speech
- Lots more….
The AI algorithm is a closely held secret, but the broad flow is:
Data (text, picture, video) in > Algorithm > Check with 3rd party > Publish or flag
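The flow above can be sketched in code. This is a minimal, hypothetical illustration only: the classifier, the third-party lookup and the threshold are invented for the example and are not Facebook's actual implementation.

```python
# Hypothetical sketch of the "data in > algorithm > 3rd-party check >
# publish or flag" flow. All names and thresholds here are assumptions.
from dataclasses import dataclass

@dataclass
class Content:
    text: str

def classify(content: Content) -> float:
    """Stand-in for an ML model: returns a misinformation score in [0, 1]."""
    suspicious_terms = {"miracle cure", "they don't want you to know"}
    hits = sum(term in content.text.lower() for term in suspicious_terms)
    return min(1.0, hits * 0.5)

def third_party_check(content: Content) -> bool:
    """Stand-in for a fact-checker lookup; True means the claim is disputed."""
    disputed_claims = {"drinking bleach cures covid"}
    return content.text.lower() in disputed_claims

def moderate(content: Content) -> str:
    """Publish the content, or flag it for possible human review."""
    score = classify(content)
    if score >= 0.9 or third_party_check(content):
        return "flag"
    return "publish"
```

The key point the sketch shows is that the algorithm does not delete anything itself: it only routes borderline content towards human review.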
My key thoughts
- Freedom of speech is so important; however, with the sheer volume of content, other methods need to be explored.
- AI is already revolutionising content control today. In the future, the technology will allow filtering to be more precise and controllable.
- AI algorithms will only be as good as the parameters that are set. These should be monitored and tweaked over time.
- Automated content generation will require automated AI checking algorithms for speed.
- This is a great use of AI technology: with the sheer volume of content being created, it would be physically impossible for humans to review it all.
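On the point about monitoring and tweaking parameters over time, here is one illustrative way that could work: use human-review outcomes to re-pick a flagging threshold that keeps precision acceptable. The scores, labels and target are made up for the example.

```python
# Illustrative parameter tweaking: choose the lowest flagging threshold
# whose flagged items meet a precision target, based on human-reviewed
# samples. Data and target values are assumptions, not real figures.

def precision_at_threshold(scored, threshold):
    """scored: list of (score, is_actually_bad) pairs from human review."""
    flagged = [bad for score, bad in scored if score >= threshold]
    return sum(flagged) / len(flagged) if flagged else 1.0

def tune_threshold(scored, target_precision=0.9):
    """Pick the lowest threshold (0.1..1.0) meeting the precision target."""
    for threshold in (t / 10 for t in range(1, 11)):
        if precision_at_threshold(scored, threshold) >= target_precision:
            return threshold
    return 1.0
```

Re-running a check like this periodically is one simple form of the ongoing monitoring suggested above.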
Picture – Simon Steinberger
Please contact me for more information on AI technology and applications.