Tech News: Meta Hunting Celeb-Scams

Meta, the parent company of Facebook and Instagram, has revealed a new plan to combat the growing number of celebrity scam ads promoting fake investment schemes by using facial recognition technology to weed them out. 

What’s the Problem? 

Fake ads featuring celebrities, which Meta calls “celeb-bait” scams, have become a plague on social media platforms in recent years, particularly ads promoting fraudulent investments, cryptocurrency schemes, or fake product endorsements. These scams use unauthorised images and fabricated comments from popular figures like Elon Musk, financial expert Martin Lewis, and Australian billionaire Gina Rinehart to lure users into clicking through to fraudulent websites, where they are often asked to share personal information or make payments under false pretences. 

Deepfakes have also been created using artificial intelligence to superimpose celebrities’ faces onto endorsement videos, producing highly realistic content that even seasoned internet users may find convincing. For example, Martin Lewis, founder of MoneySavingExpert and a frequent victim of such scams, recently told BBC Radio 4’s Today programme that he receives “countless” notifications about fake ads using his image, sharing that he feels “sick” over how they deceive unsuspecting audiences. 

How Big Is the Problem? 

The prevalence of scams featuring celebrity endorsements has skyrocketed, reflecting a global trend in online fraud. In the UK alone, the Financial Conduct Authority (FCA) reported that celebrity-related scams have doubled since 2021, with these frauds costing British consumers more than £100 million annually. According to a recent study by the Fraud Advisory Panel, financial scams leveraging celebrity endorsements rose by 30 per cent in 2022 alone, a trend fuelled by increasingly sophisticated deepfake technology that makes these scams more believable than ever. 

Not Just the UK 

The impact of celeb-bait scams is even more significant worldwide. In Australia, for instance, the Australian Competition and Consumer Commission (ACCC) reported that online scams, many featuring unauthorised celebrity endorsements, cost consumers an estimated AUD 2 billion in 2023. Social media platforms, particularly Facebook and Instagram, are frequent targets for these fraudulent ads, as scammers exploit their large audiences to reach thousands of potential victims within minutes. 

The US has also seen similar issues, with the Federal Trade Commission (FTC) noting that more than $1 billion was lost to social media fraud in 2022 alone, a figure that has increased fivefold since 2019. Fake celebrity endorsements accounted for a large proportion of these losses, with reports indicating that over 40 per cent of people who experienced fraud in the past year encountered it on a social media platform. 

Identify and Block Using Facial Recognition 

In a blog post describing how it is testing new ways to combat scams on its platforms (Facebook and Instagram), especially celeb-bait scams, Meta stated: “We’re testing the use of facial recognition technology.” 

According to Meta, this new approach will identify and block such ads before they reach users, offering a stronger line of defence in the ongoing battle against online scammers. The approach represents one of Meta’s most proactive attempts yet to address a persistent problem that has impacted both high-profile public figures and unsuspecting social media users alike. 

How Will Meta’s Facial Recognition Work? 

Meta’s facial recognition ad-blocking approach will build on its existing AI ad review systems, which scan for potentially fraudulent or policy-violating ads, but will add a further layer of facial recognition to verify the identities of public figures featured in the ads. If an ad appears suspicious and contains the image of a public figure, Meta’s system will compare that individual’s face in the ad to their official Facebook or Instagram profile pictures. When a match is confirmed and the ad is verified as a scam, Meta’s technology will remove the ad in real time. 
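
To make the described check more concrete, here is a minimal, purely illustrative Python sketch of how such an ad-review step might look. Meta has not published implementation details, so everything below is an assumption made for illustration: the `embed_face()` stub, the similarity threshold, and the `review_suspicious_ad()` helper are hypothetical placeholders, not Meta APIs.

```python
# Illustrative sketch only - not Meta's implementation.
from dataclasses import dataclass

import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed cut-off for declaring a face match


def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder face embedding: a real system would run a trained
    face-recognition model here. This stub just flattens and normalises
    the pixels so the example runs end to end."""
    vec = image.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)


@dataclass
class AdReviewResult:
    is_celeb_match: bool
    action: str  # "remove" or "allow"


def review_suspicious_ad(ad_face: np.ndarray,
                         profile_faces: list[np.ndarray],
                         flagged_as_scam: bool) -> AdReviewResult:
    """Compare the face in a suspicious ad against a public figure's
    official profile photos; remove the ad only if it both matches the
    figure and has been flagged as a scam."""
    ad_embedding = embed_face(ad_face)
    scores = [float(np.dot(ad_embedding, embed_face(p))) for p in profile_faces]
    is_match = bool(scores) and max(scores) >= SIMILARITY_THRESHOLD

    # Mirror the stated policy: facial data is discarded straight after the check.
    del ad_embedding, scores

    if is_match and flagged_as_scam:
        return AdReviewResult(is_celeb_match=True, action="remove")
    return AdReviewResult(is_celeb_match=is_match, action="allow")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.random((64, 64))
    # An identical image trivially matches, so the flagged ad is removed.
    print(review_suspicious_ad(face, [face], flagged_as_scam=True))
```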

David Agranovich, Meta’s Director of Global Threat Disruption, emphasised the importance of this shift in a recent press briefing, saying: “This process is done in real-time and is faster and much more accurate than manual human reviews, so it allows us to apply our enforcement policies more quickly and protect people on our apps from scams and celebrities.” Agranovich noted that the system has yielded “promising results” in early tests with a select group of 50,000 celebrities and public figures, who will be able to opt out of this enrolment at any time. 

According to Agranovich, the swift, automated nature of the system is critical to staying ahead of scammers, who often adapt their techniques as detection methods improve. The facial recognition system is not only intended to remove existing scam ads but to prevent them from spreading before they can reach a wide audience. Agranovich has highlighted how a rapid response of this kind is essential in a digital landscape where even a brief exposure to these ads can lead to significant financial losses for unsuspecting victims. 

When? 

This new measure is set to begin its rollout in December 2024. 

Meta’s Track Record and Renewed Focus on Privacy 

It’s worth noting, however, that Meta’s deployment of facial recognition technology marks a return to a tool it abandoned in 2021 amid concerns over privacy, accuracy, and potential biases in AI systems. Previously, Facebook used facial recognition for suggested photo tags, a feature that drew criticism and prompted the company to step back from the technology. This time, Meta says it has implemented additional safeguards to address such concerns, including the immediate deletion of facial data generated through the scam ad detection process. 

Privacy 

Privacy remains a contentious issue with facial recognition technology. Addressing concerns over its new approach, Meta is keen to highlight that it intends to use the technology purely for combating celeb-bait scams and aiding account recovery. On the scam-ad side, Meta’s Agranovich says, “Any facial data generated from these ads is deleted immediately after the match test, regardless of the result.” For account recovery, users will be asked to submit a video selfie, which Meta’s system will then compare to the profile image associated with the account; Meta has stated that the data generated in making this comparison will be stored securely and encrypted, never becoming visible to other users or even to the account owner themselves. This verification method is expected to be faster and more secure than traditional identity confirmation methods, such as uploading an official ID document. 
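
As a rough illustration of the “deleted immediately after the match test, regardless of the result” idea, the sketch below wraps the comparison in a context manager whose clean-up runs whether the match succeeds, fails, or raises an error. The names (`ephemeral_embeddings`, `verify_recovery_selfie`) and the naive pixel-based embedding are assumptions made for this example, not Meta’s actual account-recovery code.

```python
# Illustrative sketch only - not Meta's implementation.
from contextlib import contextmanager

import numpy as np


@contextmanager
def ephemeral_embeddings(*images: np.ndarray):
    """Yield face embeddings for a single comparison, then discard them.

    The finally-block runs regardless of the comparison's outcome, so no
    facial data outlives the check (and nothing is written to disk)."""
    embeddings = [img.astype(np.float32).ravel() for img in images]
    embeddings = [e / (np.linalg.norm(e) + 1e-9) for e in embeddings]
    try:
        yield embeddings
    finally:
        embeddings.clear()


def verify_recovery_selfie(selfie: np.ndarray, profile_photo: np.ndarray,
                           threshold: float = 0.8) -> bool:
    """Compare one video-selfie frame to the account's profile photo."""
    with ephemeral_embeddings(selfie, profile_photo) as (selfie_vec, profile_vec):
        return float(np.dot(selfie_vec, profile_vec)) >= threshold


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    photo = rng.random((32, 32))
    print(verify_recovery_selfie(photo, photo))  # True: identical images match
```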

Scaling the Solution and Potential Regulatory Hurdles 

Meta’s new system is set to be tested widely among a larger group of public figures in the coming months. Celebrities enrolled in the programme will receive in-app notifications and, if desired, can opt out at any time using the Accounts Centre. This large-scale trial comes as Meta faces increasing pressure from regulators, particularly in countries like Australia and the UK, where public outcry against celeb-bait scams has surged. The Australian Competition and Consumer Commission (ACCC) is currently engaged in a legal dispute with Meta over its perceived failure to stop scam ads, while mining magnate Andrew Forrest has also filed a lawsuit against the company for allegedly enabling fraudsters to misuse his image. 

Martin Lewis Sued Facebook 

In the UK, personal finance guru Martin Lewis previously sued Facebook for allowing fake ads featuring his image, ultimately reaching a settlement in which Meta agreed to fund a £3 million scam prevention initiative through Citizens Advice. Nevertheless, Lewis continues to push for stronger regulations, recently urging the UK government to empower Ofcom with additional regulatory authority to combat scam ads. “These scams are not only deceptive but damaging to the reputations of the individuals featured in them,” Lewis stated, highlighting the broader impact that celeb-bait scams have beyond financial loss. 

Despite the New Tech, It’s Still ‘A Numbers Game’ 

Despite Meta’s new approach, the company still faces a huge challenge. For example, Agranovich has admitted that, despite robust safeguards, some scams will still evade detection, saying, “It’s a numbers game,” and that, “While we have automated detection systems that run against ad creative that’s being created, scam networks are highly motivated to keep throwing things at the wall in hopes that something gets through.” As scam networks find new ways to bypass detection, Meta acknowledges that the technology will require continuous adaptation and improvement to keep up. 

What About Concerns Over AI and Bias? 

In deploying facial recognition technology, Meta has also faced scrutiny over potential biases in AI and facial recognition systems, which have been shown to have variable accuracy across different demographics. The company claims that extensive testing and review have been undertaken to minimise such biases. Also, Meta has said it will not roll out the technology in regions where it lacks regulatory approval, such as in the UK and EU, indicating a cautious approach towards compliance and accountability. 

Meta says it has “vetted these measures through our robust privacy and risk review process” and is committed to “sharing our approach to inform the industry’s defences against online scammers.” The company has also pledged to engage with regulators, policymakers, and industry experts to address ongoing challenges and align on best practices for facial recognition technology’s ethical use. 

What Does This Mean for Your Business? 

Meta’s latest move to integrate facial recognition technology into its anti-scam measures signals a significant shift toward tackling the complex world of celeb-bait scams. However, as Meta ventures back into using facial recognition, it’s clear the company must balance robust security with privacy, a concern that continues to shadow the rollout. While the technology holds promise, particularly in increasing detection speed and reducing the frequency of celebrity scams, it will undoubtedly be scrutinised by both users and regulators who have long questioned the use of facial recognition on such a broad scale. 

For everyday Facebook and Instagram users, Meta’s new facial recognition feature could mean greater security and fewer encounters with fake ads that exploit public figures for fraudulent schemes. If successful, the initiative could lessen the risk of users falling victim to scams that impersonate well-known personalities to promote fake investments or products. The added layer of facial recognition should serve as a safeguard, reducing the frequency of these fake ads in users’ feeds and building a safer browsing experience across Meta’s platforms. 

For celebrities and public figures, this development is a significant step towards reclaiming control over their public images, which are often misused without permission. The new system will help protect their reputations, preventing unauthorised use of their likenesses in fraudulent ads. Figures like Martin Lewis, who has been vocal about the damage these scams cause, could benefit as Meta finally implements more targeted measures to shield them from unauthorised endorsements. 

The impact of this initiative may extend to legitimate advertisers as well. Meta’s crackdown on celeb-bait scams will likely improve ad integrity on its platforms, helping businesses that rely on Facebook and Instagram to reach audiences without the risk of association with deceptive content. A cleaner, more trustworthy advertising environment could enhance user trust and, in turn, benefit brands that promote genuine products and services. As Meta focuses on strengthening its ad review systems, legitimate advertisers may find their content reaching more engaged, security-conscious users who are less wary of the ads they encounter online. In this way, Meta’s facial recognition technology could not only shield users and celebrities from scams but also foster a more secure, credible marketplace for businesses across its platforms.
