A shocking incident has shaken global politics and the cyber security world. With the help of Artificial Intelligence (AI), the voice of US Secretary of State Marco Rubio was cloned and used to converse with many people. Through these fake conversations, attempts were made to gather sensitive information and steer dialogue in the wrong direction. The incident presents a new and serious AI-related challenge not only to the US but to governments and security agencies around the world.
According to reports, an unknown group used AI-based voice cloning technology to replicate Marco Rubio's voice and interacted with various government officials, think tank members and international reporters. Those individuals were led to believe they were actually speaking with the US Secretary of State. This fraud was not only an attempt to harm American diplomacy; it also shows that voice-based identity is no longer reliable.
Voice cloning technology was once limited to laboratories, but it has now become widely accessible. With open-source tools and deepfake AI models, it has become possible to copy almost anyone's voice. A major threat arises when this technique is used against high-profile individuals such as politicians, military officers or CEOs of big companies.
The purpose of this incident appears to be two-fold: to gather information through fake conversations, and to tarnish the image of a country or institution or create policy confusion. The episode indicates that 'information warfare' globally is no longer limited to text or video; the era of voice-based diplomatic deception has also begun.
After this incident, agencies like the FBI, NSA and US Cyber Command have begun investigating. The question also arises as to the source of such technology: was it the handiwork of an enemy nation, or was a private cyber gang behind this operation?
In his response to the incident, Rubio said that he always uses authorized and official channels while communicating with his foreign counterparts, precisely to avoid fraudulent contact. He said this is the reality of the 21st century, where producing fakes with AI has become commonplace. Rubio warned that this will not be an isolated case: "Such incidents will happen in the future, because just a recording of a person's voice is sufficient." He believes such targets are chosen to "trick people" so that their information can be obtained. After this incident, the US State Department has instructed its employees to stay alert to fake voices and to report any suspicious contact, including cyber attacks. It is being reported that in May, the FBI had issued a warning about voice- and text-based "vishing" and "smishing" attacks, anticipating fraud attempts that impersonate senior officials.
Viewed in the context of India, this incident is a serious warning for a democratic and rapidly digitizing country like ours. If such fake voices enter the electoral process, military strategy or public discourse, the risk of misunderstanding and social disintegration may increase.
As for possible solutions to such dangers, a voice authentication protocol will have to be created. Digital signatures for voice verification should be developed and used in any sensitive dialogue. Clear and strict laws should also be enacted against voice cloning and deepfakes. In addition, ordinary citizens should be made aware of potential AI-driven frauds.
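To make the idea of "digital signatures for voice verification" concrete, here is a minimal sketch, assuming a shared secret key provisioned in advance over a trusted channel (the key name and function names are hypothetical illustrations, not any official protocol). The sender attaches a cryptographic tag to the audio; the receiver recomputes it. A cloned voice alone cannot produce a valid tag, because the attacker lacks the key.

```python
import hashlib
import hmac

# Assumption: both parties already share this secret, exchanged out of band.
SECRET_KEY = b"example-shared-secret"

def sign_audio(audio_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Return an HMAC-SHA256 tag for the audio payload."""
    return hmac.new(key, audio_bytes, hashlib.sha256).hexdigest()

def verify_audio(audio_bytes: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Recompute the tag and compare in constant time.
    A cloned clip sent without the key fails this check."""
    expected = sign_audio(audio_bytes, key)
    return hmac.compare_digest(expected, tag)

clip = b"\x00\x01\x02\x03"   # stand-in for real audio samples
tag = sign_audio(clip)
print(verify_audio(clip, tag))             # authentic payload verifies
print(verify_audio(clip + b"x", tag))      # altered payload is rejected
```

In a real deployment, public-key signatures would likely replace the shared secret so that each official signs with a private key, but the principle is the same: authenticity comes from cryptography, not from how a voice sounds.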
Ultimately, the AI-driven cloning and misuse of Marco Rubio's voice is not just a technical problem but a global security challenge. The incident is a warning that a person can no longer be identified by face or voice alone. In the time to come, we must answer technology's challenges with technology; only then can we be safe in the digital age.