By Ryuji Wolf
As AI overwhelms online spaces, proof of human is becoming essential for trust.
Before many Malaysians click “buy”, “book” or “order”, they scroll through reviews looking for genuine voices to guide their choices. A Universiti Putra Malaysia study found that the authenticity of a review has the strongest influence on purchase intention, showing how deeply Malaysians rely on these opinions.
But this trust is now under pressure. Platforms are flooded with AI-generated reviews, paid click-farm content, and coordinated manipulation campaigns using fake accounts. Google itself removed more than 240 million policy-violating reviews in 2024, a 41.18 per cent jump from the previous year.
As these synthetic voices multiply, genuine reviews lose meaning, and consumers are left wondering: is this opinion real or just code? In a digital age where identity can be faked, only a robust method to prove humanness can restore the faith people place in reviews.
Manipulated reviews lead to real-world consequences
Fake reviews now show up in multiple layers, with bots flooding platforms around the clock, throwaway accounts engineered solely to tilt ratings, and increasingly, AI-written reviews polished enough to pass as real.
And Malaysians are already paying the price. Research from Universiti Putra Malaysia found that fake reviews can temporarily inflate product ratings, but once consumers realise the truth, they feel misled and leave genuine negative feedback. This creates a cycle of disappointment, eroding confidence in platforms and brands.
The fallout doesn’t stop with consumers. Small businesses are getting hit just as hard. Recently, a small business owner was threatened by scammers who spammed her Google Maps listing with fake reviews and demanded payment to remove them. Authentic feedback gets buried, manipulated ratings rise to the top, and honest sellers lose customers who can no longer tell what’s genuine.
When the credibility of reviews collapses, both sides lose – the buyer who feels deceived, and the seller whose reputation becomes collateral damage.
The trust gap that platforms can’t close
The core problem is that today’s verification systems were built for a very different internet. Email sign-ups, phone verification, and simple CAPTCHAs worked when bots were basic and account creation was slow. Those assumptions no longer hold.
Bots now bypass CAPTCHA with near-perfect accuracy, AI tools can generate convincing identities in seconds, and paid groups can produce hundreds of coordinated accounts with minimal effort. The result is an arms race that platforms are consistently losing.
Platforms conduct frequent sweeps to remove fake reviews, yet they still struggle to keep pace. The gap widens each time synthetic accounts slip through, because every fake review that survives chips away at the credibility of the entire platform. Over time, people stop trusting five-star ratings, question popular recommendations, and become sceptical of even genuine feedback.
Without a reliable way to confirm whether a review originates from a real individual, this trust gap only deepens, leaving platforms unable to reassure consumers or restore the confidence that once made online reviews meaningful – especially in an era where Malaysian consumers place trust at the centre of brand loyalty.
Restoring digital integrity with human verification
This is why verifying the presence of a real human behind an account is becoming essential infrastructure for digital platforms. If each review can be tied to an actual person, even anonymously, it provides a much stronger foundation for authenticity.
World ID approaches this with a privacy-preserving verification system that allows people to confirm they are real and unique. The Orb captures an image of a person’s face and eyes solely to establish humanness; the images are deleted from the Orb immediately and retained only on the individual’s device. Zero-knowledge proofs then allow the person to signal “I’m a real human” without revealing any personal information.
In an era where AI can imitate human behaviour almost flawlessly, this form of verification offers a practical path to protect the integrity of reviews, and rebuild the trust consumers rely on to make decisions.
Proving humanness for Malaysia’s future digital trust
Malaysia’s digital economy is accelerating, with e-commerce revenue reaching RM937.5 billion in the first nine months of 2025. As this growth continues, the country faces a new priority: ensuring that digital interactions can be trusted. Authenticity is becoming just as crucial as infrastructure upgrades, especially as Malaysians increasingly rely on online platforms for daily decisions.
Privacy-preserving human verification offers a practical way to protect the integrity of these interactions without slowing innovation. By enabling individuals to prove their humanness while keeping all personal data on their own devices, World supports a digital ecosystem grounded in genuine participation rather than synthetic activity.
Malaysia’s digital future will depend not only on adopting new technologies, but on whether people trust the platforms they use, and proving humanness is emerging as a key part of that trust.
-- BERNAMA
About World
World is intended to be the world’s largest, most inclusive network of real humans. The project was originally conceived by Sam Altman, Max Novendstern and Alex Blania and aims to provide proof of human, finance and connection for every human in the age of AI. Find out more about World at world.org and on X.
Ryuji Wolf is Regional General Manager of Meridian East, an operating partner of World.