Shemar Bryan | AI is under fire, and why Jamaicans should care
At first glance, the recent news that Malaysia and Indonesia have banned X’s (formerly Twitter) artificial intelligence tool, Grok, might seem like a distant, foreign issue, something happening “over there” with little relevance to Jamaica. However, this development should be a serious wake-up call for Jamaicans, our lawmakers, and our digital regulators. The reality is far more serious than it appears.
The same technology being restricted abroad is already accessible here, and the harm it enables, particularly the creation of non-consensual, sexually explicit AI-generated images, will directly undermine privacy rights, heighten child-protection concerns, damage our already fragile digital ecosystem, and even worsen our battle against sexual abuse in the country.
In January 2026, Malaysia and Indonesia became the first countries to ban Grok outright, citing failures to prevent the generation of pornographic and abusive deepfake content. Authorities in both countries stated that the tool posed unacceptable risks to privacy, dignity, and online safety, particularly for women and children.
This was not a symbolic move. These governments concluded that post-harm moderation was insufficient and that the technology itself needed to be restricted until safeguards were in place.
NOT EXEMPT
Jamaica is not exempt from this problem. In fact, we may be more vulnerable than many larger countries. Jamaicans are among the most active social media users in the Caribbean, and the country is already burdened by a long-standing struggle with sexual offences, cyberbullying, and revenge pornography, issues that more often than not are left unresolved.
Introducing AI tools capable of generating fake nude or sexual images of real people into this environment creates a dangerous imbalance between easily accessible tools and limited legal consequences.
If such an image is created of a Jamaican teacher, student, public figure, or private citizen, the harm does not disappear because the image is “fake”. Reputational damage, psychological trauma, and social stigma are very real.
Unlike Malaysia and Indonesia, the United Kingdom has taken a legislative approach rather than an outright ban. In response to growing misuse of AI tools like Grok, the UK government is introducing laws that would criminalise both the creation of, and the request to generate, intimate images of a person without their consent, regardless of whether those images are later shared. This is a crucial, transformative shift.
Traditionally, many laws focus on punishing distribution. The UK recognises that with AI, the harm begins at creation. Once an image exists, even briefly, it can be screenshotted, shared, leaked, or used for any number of immoral purposes, including blackmail. This approach reflects a broader truth: it is far better to prevent the harm than to chase it after the damage is done.
At present, Jamaica has no specific legislation that directly addresses AI-generated deepfake pornography, non-consensual “nudification” of images, or the creation, as opposed to the distribution, of synthetic sexual content.
While existing laws, such as provisions under the Cybercrimes Act or the constitutional right to privacy, may offer some relief, they were not designed for generative AI. Victims are often left navigating uncertain legal terrain, with remedies that are slow, expensive, or ineffective.
This is particularly troubling when children or young adults are involved. AI-generated images that resemble minors raise serious concerns about child sexual abuse material, even when no real photograph was used. Regulators in Indonesia cited precisely this risk when justifying their ban.
DANGEROUS ASSUMPTIONS
One of the most dangerous assumptions is that Jamaica can simply wait and respond later, but AI does not spread slowly. Once tools like Grok become embedded in everyday digital culture, harm scales rapidly. No one could have predicted how quickly AI-generated videos and images would come to imitate reality and blur the line between what is real and what is fabricated. A delay in action will only compound the problems we already face.
Images spread faster than courts can act. Psychological harm cannot be undone. Social consequences often persist long after content is removed. This is why international regulators are shifting away from purely reactive approaches. Preventive legislation banning or tightly regulating harmful AI uses is increasingly seen as the only effective solution. The Jamaican Government should therefore adopt this proactive approach and craft legislation that comprehensively addresses AI itself, the users of AI, and the developers of AI software.
The risks of AI extend well beyond sexual or privacy-related offences. Imagine a fabricated image of a student circulating on WhatsApp, a political opponent weaponising AI images during an election season, scammers using AI to impersonate loved ones, or even an ordinary individual using AI to tamper with or generate ‘evidence’ for use in court proceedings. These are not far-fetched scenarios, nor are they new. These are real issues already occurring globally, and Jamaica is not immune.
It is of the utmost importance that Jamaica take a proactive approach, following the UK’s example in its amendments to the Data (Use and Access) Act. Similar amendments could be made to Jamaica’s Data Protection Act and related legislation.
Shemar Bryan is an attorney-at-law. Send feedback to columns@gleanerjm.com.
