McAfee unveils Project Mockingbird to stop AI voice clone scams


McAfee has introduced Project Mockingbird, an AI-powered deepfake audio detection technology, as a way to detect AI-generated deepfakes that use audio to scam consumers with fake news and other schemes.

Unveiled at CES 2024, the big tech trade show in Las Vegas, the technology aims to shield consumers from cybercriminals wielding manipulated, AI-generated audio to perpetrate scams and manipulate public perception.

In these scams, such as the one in the video attached, scammers start a video with a legitimate speaker, such as a well-known newscaster. The video then splices in fabricated material, making the speaker appear to utter words the real person never actually said. It's a deepfake, with both audio and video, said Steve Grobman, CTO of McAfee, in an interview with VentureBeat.

"McAfee has been all about protecting consumers from the threats that impact their digital lives. We've done that forever, traditionally, around detecting malware and preventing people from going to dangerous websites," Grobman said.
"Clearly, with generative AI, we're starting to see a very rapid pivot to cybercriminals, bad actors, using generative AI to build a wide range of scams," he said.

"As we move forward into the election cycle, we fully expect there to be use of generative AI in a number of forms for disinformation, as well as legitimate political campaign content generation. So, because of that, over the last couple of years, McAfee has really increased our investment in how we make sure that we have the right technology that will be able to go into our various products and backend technologies that can detect these capabilities. That will then be able to be used by our customers to make more informed decisions on whether a video is authentic, whether it's something they want to trust, whether it's something that they need to be more cautious around."

Used in conjunction with other hacked material, the deepfakes could easily fool people. For instance, Insomniac Games, the maker of Spider-Man 2, was hacked and had its private data leaked onto the web. Mixed in among the seemingly legitimate material could be deepfake content that would be hard to distinguish from the real hacked data taken from the victim company.

"What we're going to be announcing at CES is really our first public set of demonstrations of some of our newer technologies that we built," Grobman said. "We're working across all domains. So we're working on technology for image detection, video detection, text detection. One that we've put a lot of investment into recently is deepfake audio. And one of the reasons is, if you think about an adversary creating fake content, there's a lot of optionality to use all sorts of video that isn't necessarily the person that the audio is coming from. There's the classic deepfake, where you have somebody talking, and the video and audio are synchronized.
But there's a lot of opportunity to lay the audio track on top of B-roll or other footage, where the video in the picture is not the narrator."

Project Mockingbird

Project Mockingbird detects whether the audio truly comes from the human speaker or not, based on analyzing the words that are spoken. It's a way to combat the concerning trend of using generative AI to create convincing deepfakes.

Creating deepfakes of celebrities in porn videos has been a problem for a while, but most of those are confined to deepfake video sites, and it's relatively easy for consumers to avoid such scams. With deepfake audio tricks, the problem is more insidious, Grobman said. Plenty of these deepfake audio scams sit in posts on social media, he said, and he is particularly concerned about their rise ahead of the 2024 U.S. presidential election.

The surge in AI advancements has made it easier for cybercriminals to create deceptive content, leading to a rise in scams that exploit manipulated audio and video. These deceptions range from voice cloning to impersonate loved ones soliciting money, to altering the audio of authentic videos, making it challenging for consumers to discern what is authentic in the digital realm.

Anticipating consumers' pressing need to distinguish real from manipulated content, McAfee Labs developed an AI model capable of detecting AI-generated audio. Project Mockingbird employs a blend of AI-powered contextual, behavioral, and categorical detection models, which McAfee says achieve better than 90% accuracy in identifying and safeguarding against maliciously altered audio in videos. Grobman likened the technology to a weather forecast that helps individuals make informed decisions in their digital engagements.
Grobman asserted that McAfee's new AI detection capabilities empower users to understand their digital landscape and gauge the authenticity of online content accurately.

"The use cases for this AI detection technology are far-ranging and will prove invaluable to consumers amidst a rise in AI-generated scams and disinformation. With McAfee's deepfake audio detection capabilities, we'll be putting the power of knowing what is real or fake directly into the hands of consumers," Grobman said. "We'll help consumers avoid 'cheapfake' scams where a cloned celebrity is claiming a new limited-time giveaway, and also make sure consumers know instantaneously, when watching a video about a presidential candidate, whether it's real or AI-generated for malicious purposes. This takes protection in the age of AI to a whole new level. We aim to give users the clarity and confidence to navigate the nuances in our new AI-driven world, to protect their online privacy and identity, and well-being."

In terms of the cybercrime ecosystem, Grobman said McAfee's threat research team has found that bad actors use legitimate accounts registered with ad networks on platforms like Meta. McAfee found such deepfakes being posted through social media ad platforms including Facebook, Instagram, Threads and Messenger. In one case, a legitimate church's account was hijacked and the bad actors posted deepfake scam content to social media.

"The target is often the consumer. The way that the bad actors are able to get to them is through some of the soft-target infrastructure of other organizations," Grobman said. "We see this also on some of what's being hosted once people fall for these deepfakes." In a case involving a crypto scam video, the bad actors want the user to download an app or register on a website. "It's putting all these pieces together that creates a perfect storm," he said.
He said the cybercriminals use the hijacked ad accounts of a church's or a business's social media presence, and that's how they disseminate the fake news. In an example Grobman called a "cheapfake," a legitimate video of a news broadcast has some of its real audio replaced with deepfake audio in order to set up a crypto scam. A video from a credible source, in this case CNBC, starts talking about a new investment platform and is then hijacked to steer users to a fake crypto exchange.

As McAfee's technology listens to the audio, it determines where the deepfake audio starts and flags it. "At the beginning, it was legitimate audio and video, then the graph shows where the fake portions are," Grobman said.

Grobman said the deepfake detection technology will be integrated into a product to protect users, who are already concerned about being exposed to deepfakes. And in this case, Grobman notes, it is pretty hard to keep deepfake audio from…
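To make the flagging step concrete: McAfee has not published how Project Mockingbird works, so the following is only an illustrative sketch of the general idea the article describes. It assumes a hypothetical detector has already scored the audio window by window with a "probability of AI-generated speech," and shows how such per-window scores could be turned into the start/end times of flagged segments, like the graph Grobman describes. The function name, threshold, and window size are all assumptions for illustration.

```python
# Illustrative sketch only -- NOT McAfee's implementation, which is proprietary.
# Given per-window scores from a hypothetical deepfake-audio classifier,
# return the (start, end) times, in seconds, of segments flagged as fake.

def flag_fake_segments(window_scores, threshold=0.9, window_seconds=1.0):
    segments = []
    start = None  # start time of the segment currently being flagged, if any
    for i, score in enumerate(window_scores):
        if score >= threshold and start is None:
            # First window over the threshold: a fake segment begins here.
            start = i * window_seconds
        elif score < threshold and start is not None:
            # Score dropped back below the threshold: close the segment.
            segments.append((start, i * window_seconds))
            start = None
    if start is not None:
        # The clip ended while still flagged; close the final segment.
        segments.append((start, len(window_scores) * window_seconds))
    return segments

# Example: a clip where the first 3 seconds are authentic broadcast audio
# and the remainder has been replaced with cloned speech.
scores = [0.05, 0.08, 0.12, 0.95, 0.97, 0.93]
print(flag_fake_segments(scores))  # [(3.0, 6.0)]
```

In practice the hard part is the classifier producing the scores; this post-processing step merely converts its output into the kind of timeline visualization the article mentions, where a viewer can see exactly where the legitimate audio ends and the fake portion begins.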