Are you worried about AI collecting your facial data from all the pictures you have ever posted or shared? Researchers have now developed a method for hindering facial recognition.
It is commonly accepted nowadays that the images we post or share online can end up being used by third parties for one reason or another. It may not be something we truly agree with, but most of us have accepted it as an undesirable consequence of using free social media apps and websites.
To counter this, a team of researchers from the University of Chicago has developed an algorithm named "Fawkes," as an ode to Guy Fawkes, that works in the background to slightly alter your image in ways that are mostly unnoticeable to the human eye. The motivation is that companies such as Clearview, which collect large amounts of facial data, use artificial intelligence to find and connect one photograph of a person's face to another photograph from elsewhere by linking the similarities between the two photos. This matching does not rely only on identical facial symmetry or visible characteristics such as moles. Facial recognition also looks into "invisible relationships between the pixels that make up a computer-generated picture of that face."
This is where "Fawkes" comes into play. It swaps out or distorts certain pixels of your image to make it hard or impossible for facial recognition to connect it to other images. To the user, the image looks the same as before; to facial recognition software, however, it becomes a distinct image with no match elsewhere and thus cannot be linked and recognized. The research team reported that this "cloaking" technique had a 100% success rate in fooling facial recognition systems used by Microsoft, Amazon, and Google. Earlier versions of the tool reportedly made visually noticeable changes to some images, but it has since been improved.
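To give a feel for the general idea, here is a minimal toy sketch in Python. It is emphatically not the Fawkes algorithm: Fawkes computes targeted, learned perturbations against the feature extractors used by recognition models, whereas this sketch merely shows what a "cloak" means at the pixel level: a small, bounded change to every pixel that a human viewer would struggle to see. The `cloak` function and the `epsilon` bound are illustrative assumptions, not part of the real tool.

```python
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 3.0, seed: int = 0) -> np.ndarray:
    """Toy 'cloak': add a small perturbation, at most `epsilon` per channel.

    Illustrative only; the real Fawkes perturbation is optimized to shift
    the image's position in a face-recognition model's feature space,
    not random noise like this.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Clip so pixel values stay in the valid 0-255 range.
    return np.clip(image.astype(float) + noise, 0, 255).astype(np.uint8)

# A stand-in 64x64 RGB "photo" of uniform mid-gray pixels.
photo = np.full((64, 64, 3), 128, dtype=np.uint8)
cloaked = cloak(photo)

# Each pixel changes by only a few intensity levels (invisible to the eye),
# yet the pixel-level relationships a model might key on are altered.
max_change = int(np.abs(cloaked.astype(int) - photo.astype(int)).max())
print(max_change)
```

The point of the sketch is the asymmetry it illustrates: a per-pixel change of a few intensity levels out of 255 is imperceptible to a human, while a model that keys on fine-grained pixel relationships sees a different input.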
The algorithm is available to the public for free, and you can test it out yourself. Although currently successful, it is only a temporary measure to hinder the efforts of "maintaining accurate models for large-scale facial recognition." Recognition systems will likely become more advanced in the future, but if a freely available method puts even a slight brake on the business of collecting facial data, it is definitely worth using.