GAN of worms

Generative adversarial networks (GANs) are a machine learning method in which two neural networks compete against each other. The generator network tries to create outputs that resemble the training data, while the discriminator network tries to detect whether a given input is real or produced by its adversary. Repeating this process, with each network's results fed back as a training signal for the other, keeps improving both networks and in turn the quality of the generated content.
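The adversarial loop described above can be sketched in miniature. This is only an illustrative toy, not a real GAN: it assumes 1-D Gaussian "real" data, an affine generator, and a logistic-regression discriminator with hand-derived gradients, so the alternating generator/discriminator updates fit in a few lines. Real GANs use deep networks and automatic differentiation, but the structure of the loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 1.25   # the "real" distribution to imitate (an assumption of this toy)
lr, steps, batch = 0.01, 3000, 64

# Generator G(z) = wg*z + bg; discriminator D(x) = sigmoid(wd*x + bd)
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(steps):
    # --- discriminator step: learn to tell real samples from generated ones ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(size=batch)
    fake = wg * z + bg
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    # gradients of -[log D(real) + log(1 - D(fake))] w.r.t. wd, bd
    grad_wd = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_bd = np.mean(-(1 - d_real) + d_fake)
    wd -= lr * grad_wd
    bd -= lr * grad_bd

    # --- generator step: fool the (frozen) discriminator ---
    z = rng.normal(size=batch)
    fake = wg * z + bg
    d_fake = sigmoid(wd * fake + bd)
    # gradients of -log D(fake) w.r.t. wg, bg (non-saturating generator loss)
    grad_out = -(1 - d_fake) * wd
    wg -= lr * np.mean(grad_out * z)
    bg -= lr * np.mean(grad_out)

# The generator's outputs are distributed as N(bg, |wg|); training should
# drift them toward the real distribution N(4, 1.25).
print(f"generated ~ N({bg:.2f}, {abs(wg):.2f})")
```

Note the alternation: the discriminator improves at spotting fakes, which changes the gradient the generator receives, which pushes the generated samples toward the real distribution — exactly the feedback loop described above.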

Deep fakes are already a thing. At a quick glance they can easily pass as real; only a closer investigation might reveal the small anomalies that tell us something is wrong.

We are on the verge of reaching the point where we humans are no longer able to tell the difference between real content and generated content.

Can this achievement of the machine already count as a sort of singularity? These algorithms might not qualify for what we define as intelligence. But on the other hand, they are both better at creating such content and better at detecting fakes than we are.

So-called artificial intelligence doesn't have to be "intelligent" to cause harm. What will happen to our trust in information when anything could just be random output generated by a piece of silicon?