In my previous article, I covered the basics of ChatGPT. Cyber experts have opined that ChatGPT gives new ammunition to cybercriminals to intensify their attacks. An interesting article from Harvard Business Review states that ChatGPT opens up new avenues of attack for hackers. Are we exaggerating or is this really worrisome?
Sophisticated phishing and social engineering attacks: A general tip offered to protect people from phishing attacks has been to look for grammatical mistakes or spelling errors in the emails they receive. ChatGPT’s ability to generate flawless prose eliminates such errors. Communications from legitimate companies would have gone through numerous iterations within their communications teams to reach that professional level of writing, but ChatGPT can produce writing of the same proficiency within seconds. One needs no communication or English language skills to generate it.
This eliminates one way for the general public to separate a phishing email from a real one sent by a legitimate sender. Agreed, this is not good news for you and me, but the fact remains that we need to be even more careful now. The basic rules of protecting ourselves stay the same. For example, it should raise a red flag if your banker asks for your Permanent Account Number (PAN) via email. Even if the email looks legitimate, you have the option of visiting your nearest branch to check the authenticity of such a request.
Social engineering attacks leverage our psychological weaknesses to gain control of our computer systems via manipulation, deception, and influence. Given how elegantly ChatGPT creates content, including creative writing, experts believe such content can be used to intensify social engineering attacks.
Increased awareness is the only weapon we have against such attacks. However intense the criminal attacks become, if we follow the general tips and do not fall into their trap, we should be safe, as always. Please refer to my prior article on How to Prevent Identity Theft for a few tips on this matter.
New malware: There is a theory that cybercriminals can leverage ChatGPT’s ability to provide ready-made code to write even more sophisticated malware in a shorter period of time. Although ChatGPT refuses to supply code for illicit purposes, the view is that there are ways to manipulate it into providing code for creating malware.
I don’t disagree with such possibilities, but we need to remember that criminal hackers have always been adept at creating malware. Why would they need ChatGPT to guide them or to supply ready-made code?
We can go on hatching conspiracy theories, but prevention is better than cure: keep our systems up to date with the latest patches to protect ourselves as best we can.
Spreading fake news: Recently, a person was arrested in China for allegedly spreading fake news using ChatGPT. The suspect appears to have generated a fake report of a train crash with the intention of making an illicit profit. With its ability to create content, ChatGPT would have made this easy, but fake news and bomb hoaxes are not new. It is our inherent human curiosity to hear something dramatic that paves the way for such occurrences, more than anything else. As long as there are people forwarding fake news on WhatsApp without thinking twice, and people relishing such dramatic content, I don’t think this will stop even with stringent regulations in various countries.
Conclusion: More than a cyber threat, I think ChatGPT is a bigger psychological threat. Our younger generation is already reeling from smartphone addiction, which has changed the whole dynamic of social interaction. We see people living in a virtual social networking world rather than the real world. ChatGPT, being even more powerful and giving the sense that “there is someone there” chatting with us, could bring another wave of socio-psychological changes to our society. I see this as a bigger threat than the cyber threat.