What's happening?

The rise of AI technology raises red flags in many fields, including journalism, and synthetic media is making matters worse. This guide covers what synthetic media is, the techniques behind it, and the red flags it raises for journalism.

Why it matters:

The various forms of data that make up news content can now be convincingly fabricated, as synthetic media (algorithms that can generate and manipulate text, images, and audiovisual content) is available to anyone who seeks it.

With these AI-based models, 'it's possible to create faces and places that don't exist and even create a digital voice avatar that mimics human speech.' (Aldana Vales 2019)

Imagine a world where it is difficult to differentiate fake news from real news, because the disseminators of fake news can alter the 'evidence' to suit their agenda. For instance, few would doubt that World War III had begun if videos of Trump, Putin, and Kim declaring war circulated globally online. Though such news might be debunked by the governments involved, the psychological and economic panic it caused could be greater than the effect of a missile.

Digging Deeper

Synthetic media can be created with three forms of generative artificial intelligence: Generative Adversarial Networks (GANs), Variational Autoencoders, and Recurrent Neural Networks, used for photo, video, and text generation respectively.
The word 'generation' is used because most of the media created with these algorithms depicts things that don't exist; however, synthetic media can also be used for duplication.

According to Aldana Vales, 'Generative Adversarial Networks use two neural networks (a neural network is a computing system that can predict and model complex relationships and patterns) that compete against each other.'

The first network acts as a generator and the second as a discriminator. The discriminator supervises the generator, flagging any output it can tell apart from real examples. After enough back-and-forth revisions between the two, the generated content closely resembles the real thing.

Unlike in Generative Adversarial Networks, the neural networks in Variational Autoencoders are called the encoder and the decoder, since the technique involves compressing and then reconstructing the content. 'The decoder includes probability modeling that identifies likely differences between the two so it can reconstruct elements that would otherwise get lost through the encoding-decoding process.' (Aldana Vales 2019)

Recurrent neural networks work by 'recognizing the structure on a large set of text'. This is the method behind text autocorrect in phone apps.

These techniques are applied in projects such as GauGAN, Face2Face, and the GPT-2 model. A more familiar application of synthetic media is found in Siri and Alexa: these virtual assistants can now 'turn text into audio and mimic human speech'.

In a 2017 article titled 'AI-Assisted porn is here and we're all fucked', Vice exposed the circulation of a fake porn video. The fakery itself isn't the scandal, since most plots portrayed in porn movies are fake anyway (LoL); the problem is that the performer wore the face of a popular non-pornographic actress, Gal Gadot (Wonder Woman).
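The generator-versus-discriminator loop described above can be sketched in a few lines of code. The following is a minimal illustrative toy in Python/NumPy, not real GAN code: the 'generator' is a single affine transform of noise, the 'discriminator' is logistic regression, and the 'authentic data' is just a Gaussian with mean 4. All names, parameters, and the target distribution are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: turns noise z into a forged sample, fake = g_w * z + g_b
g_w, g_b = 1.0, 0.0
# Discriminator: scores a sample as real (near 1) or fake (near 0)
d_w, d_b = 0.1, 0.0

lr = 0.01
for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # "authentic" data
    z = rng.normal(0.0, 1.0, size=32)      # noise input
    fake = g_w * z + g_b                   # the generator's forgeries

    # Discriminator update: push real scores up, fake scores down
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # Generator update: revise the forgeries so the discriminator
    # scores them as real (the "to and fro" competition)
    p_fake = sigmoid(d_w * fake + d_b)
    g_w += lr * np.mean((1 - p_fake) * d_w * z)
    g_b += lr * np.mean((1 - p_fake) * d_w)

# After training, g_b (the mean of the forgeries) has drifted toward
# the real data's mean, i.e. the fakes have come to resemble the real thing.
```

The design mirrors the article's description: neither network ever sees the "right answer" directly; the generator improves only through the discriminator's feedback, which is exactly why the eventual output can be hard to tell from authentic material.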
Also, in 2018, 'a video showing President Barack Obama talking about the risks of manipulated videos' was circulated by BuzzFeed. The unsettling thing about this video is that the AI-generated subject has Obama's face but Jordan Peele's voice, thanks to synthetic media.

There is an ongoing campaign against the potential harm of synthetic media to news authenticity: 'Beyond reporting... newsrooms are focusing on synthetic media detection and validating information. The Wall Street Journal, for instance, created a newsroom guide and committee to detect deepfakes. The New York Times recently announced that [it] is exploring a blockchain-based system to fight misinformation online.' (Aldana Vales 2019)

Bottom line

Synthetic media could help news agencies break language barriers seamlessly, but it could also fuel the circulation of fake news. While it is impossible to stop giant tech companies from diving into AI research, journalists can learn to control the damage posed by synthetic media.