Deepfakes need to be stopped
February 17, 2020
Whether you are browsing social media, YouTube or the news, there is one thing you are sure to come across: the media. The media is normally a reliable source of information, but extremely biased channels and fake news have made it less dependable. Now we have a new problem to add into the mix: deepfakes. Deepfaking is a fairly new technique, created in 2017, that uses AI-driven human image synthesis on existing images and videos to create fake visual or audio recordings of a person. If abused, this technology can destroy a person's reputation.
Most people have at least heard of fake news, but deepfakes are not as publicized, even though they are a dangerous tool. Senior Cameron Longabaugh said, "I think as time goes on technology is going to evolve to the point that it is harder to tell the difference between someone deep faking a face and an actual video." The longer we wait to find a solution, the worse the situation will become. Therefore, we must find a way to curb the spread of deepfakes before it's too late.
In fact, deepfakes have already permeated the media. With the presidential election coming up, many politicians are taking a stand to voice their opinions. But unless the video is live footage, there is a possibility that their speech has been digitally altered to imply something completely different. Senior Madeline Borger said, "It actually matters in what instance deep faking is used because if it is used in an election campaign video where it portrays a candidate saying something that is not in accordance with their platform, it could be defamation."
This sort of defamation has already occurred: according to the International Business Times, a video posted on Twitter of Joe Biden making racist remarks was confirmed to be fake. Even though it was ultimately debunked, the incident has long-term implications. It damages the credibility of the U.S. media, and other countries may find a vulnerability in our media system to exploit. That could lead to international cyberattacks, with more deepfakes permeating U.S. media and spreading propaganda.
If this problem is not addressed soon, it could spiral out of control. Borger said, "It would be imperative to develop that technology in the media sphere so the public can actually trust their government." Some defamation laws punish the person posting a deepfake, but that doesn't change the fact that millions of people have already viewed the false information. Punishment isn't enough; prevention is key to protecting the public.
The government doesn't have the resources to verify that every single video published is credible, so it is up to private companies to ensure that the information on their platforms is real. Companies need to recognize the detrimental effects deepfakes can have on their reputations and start developing technology that can pinpoint tell-tale signs, such as an unnaturally smooth face or infrequent blinking, that indicate a video is likely a deepfake, and then take it down quickly.
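To give a sense of what "pinpointing tell-tale signs" could look like in practice, here is a minimal sketch of a blink-rate check, one of the signs mentioned above. This is purely illustrative, not any company's actual detector: the function names, the eye-openness threshold, and the blink-rate cutoff are all assumptions, and the per-frame eye-openness values are presumed to come from a separate face-landmark detector.

```python
# Illustrative sketch: flag a clip whose blink rate is far below the
# human average. Real detectors are far more sophisticated; this only
# shows the idea of turning a tell-tale sign into a measurable check.
# Inputs are hypothetical per-frame "eye openness" scores (higher =
# eyes more open), assumed to come from a face-landmark detector.

def count_blinks(openness_per_frame, closed_threshold=0.2):
    """Count blinks: each run of frames below the threshold is one blink."""
    blinks = 0
    eyes_closed = False
    for openness in openness_per_frame:
        if openness < closed_threshold and not eyes_closed:
            blinks += 1          # eyes just closed: start of a new blink
            eyes_closed = True
        elif openness >= closed_threshold:
            eyes_closed = False  # eyes reopened: blink is over
    return blinks

def looks_suspicious(openness_per_frame, fps=30, min_blinks_per_minute=4):
    """Flag a clip if it blinks far less often than a typical person."""
    minutes = len(openness_per_frame) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(openness_per_frame) / minutes
    return rate < min_blinks_per_minute

# Example: a minute of footage (30 fps) with no blinks at all is flagged,
# while footage with roughly a dozen blinks per minute is not.
no_blinks = [0.3] * 1800
normal = ([0.3] * 150 + [0.1] * 3) * 12
print(looks_suspicious(no_blinks), looks_suspicious(normal))
```

The thresholds here (eyes "closed" below 0.2, fewer than 4 blinks per minute) are made-up values for the sketch; a real system would tune them on data and combine many such signals rather than rely on one.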
Until then, there is the conventional reliable-source method taught in many schools. Computer Science teacher Chamara Wijeratne said, "I wouldn't take everything on the news or social media as 100 percent true and don't jump to conclusions quickly. Cross check your sources and refer to multiple news outlets." This is one of the best ways to judge whether a video is real. If something seems too outrageous or sensational to be true, it probably is not.
In this day and age, the media is incredibly valuable to the public. "The media is there to give information but if that information is false then they will be discredited and there will need to be alternative ways to get information," Mr. Wijeratne said. So until some sort of technology is invented to catch these deepfakes in the act, be sure to double-check whether a video you are watching has proper sources or any signs that it is the real deal.