Cloud computing and neural networks are absolutely everywhere: in the searches on your smart device returning relevant results for queries, in targeted ads on smart TVs, and even at sports matches and concerts. Your data is valuable for a myriad of reasons, and much of it goes toward feeding self-learning systems with objectives of their own.

Advertising networks and the ‘suggestions’ you see on websites like eBay and Amazon are based on your previous searches and purchases; the AI systems learn from you and go on to become more powerful and accurate. In the dark ages this kind of advanced behaviour from clever machines would have been called witchcraft, and you might have been accused of being a heretic, but these days we accept this foreboding behaviour as quite normal, a way of life we have all come to accept.

There are traps all of us can fall into with this way of thinking, because the neural and computing networks never stop analysing our behaviours, traits and habits. Mostly these systems exist for financial gain from the data we submit, willingly or not. Either way, we do need to protect ourselves from these sometimes intrusive systems.

Recently the Facebook social media platform saw a trending ‘ten year challenge’: post a picture of yourself from ten years ago alongside a very recent one. Those who took part saw it as a funny and interesting thing to do, but probably did not consider the longer-term implications of the background AI and neural networks that are constantly running and analysing uploaded content.

By this, I mean that facial recognition software and its associated applications, whether run by Facebook or, in all probability, by other agencies, would have actively scanned the two uploaded photos of each individual and cross-referenced them, picking out key features in both images. In this way the AI and neural networks are able to predict how people's features change over a set period. Perhaps the scanning systems even searched back through profiles for earlier photos to build a near-perfect picture of that set of changes.
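As a rough sketch of what such cross-referencing might involve (this is an illustration, not Facebook's actual pipeline): face-matching systems typically reduce each photo to a numeric "embedding" vector produced by a trained network, then measure how similar the two vectors are. The embedding values and the match threshold below are made up for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings extracted from the two uploaded photos.
# Real systems use 128- or 512-dimensional vectors from a trained network.
photo_2009 = [0.12, 0.80, 0.31, 0.45]
photo_2019 = [0.15, 0.76, 0.35, 0.41]

similarity = cosine_similarity(photo_2009, photo_2019)
same_person = similarity > 0.9  # the 0.9 threshold is an assumption for this sketch
```

Ten years of ageing shifts the embedding a little, but a well-trained network keeps the two vectors close enough to match, and the difference between them is exactly the "how faces age" signal such a system could learn from.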

Facebook has just announced that it will be removing videos modified by AI, known as deepfakes, from its social media platform. These are computer-generated images created to appear very real and legitimate.

In a blog post, the company said that these videos are made to distort reality and pose a “significant challenge” for the technology industry. Deepfakes are not yet common but are growing in number, and have been used to steer political campaigns and to put words in celebrities' mouths, with on-the-fly processing merging, replacing or superimposing content in ways that make it appear real.

The odd thing is that Facebook and its subsidiary companies are using AI to detect images and videos that are not real, hoping to filter them out by this method alongside human review. But this could be a lost cause, with AI trying to out-trick and outwit other AI systems. When will it stop? Perhaps there is a point where the battle will be over and the trickery will be too powerful to detect by normal means.
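The arms race can be pictured with a toy numeric sketch (this is an illustration of the dynamic, not real deepfake or detection technology): a "detector" flags content whose artefact score exceeds its threshold, while a "generator" keeps lowering its artefact score to slip underneath. All the numbers are invented for the example.

```python
# Toy illustration of the detection arms race: each time the detector catches
# the generated content, the generator hides its artefacts better, and the
# detector tightens its threshold in response.

def detector_flags(artefact_score, threshold):
    """The detector flags content whose artefact score exceeds its threshold."""
    return artefact_score > threshold

artefact_score = 0.9   # hypothetical artefact level of a generated video
threshold = 0.5        # hypothetical detector sensitivity

rounds = 0
while detector_flags(artefact_score, threshold):
    artefact_score *= 0.8   # generator learns to reduce visible artefacts
    threshold *= 0.9        # detector tightens, but more slowly
    rounds += 1             # another round of the cat-and-mouse game
```

Because the generator here improves slightly faster than the detector, the loop always ends with the fake slipping through undetected, which is exactly the "lost cause" worry: detection drives improvement in the thing being detected.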

Facebook has announced it is investing $10 million to fund and improve deepfake detection, but as so often with technology, the more detection that is created, the cleverer the generating systems become, and the effort can turn out to be a lost cause.

There is really no stopping these systems, whether for or against detection, so it is probably wiser simply not to submit personal images, as you do not really know what they may end up being used for. Beyond data protection, we now need to be aware of our own images and voices being used in ways we are only just beginning to realise and understand.