As Venezuela geared up for carnival season last month, an English-language video was posted on the “House of News” YouTube channel. Its presenter, Noah, hailed an alleged tourism boom, with millions of citizens supposedly flocking to the country’s Caribbean islands to party.
The report, which was widely publicized in media sympathetic to President Nicolás Maduro’s socialist government, suggested that claims about widespread impoverishment in oil-rich Venezuela had been “exaggerated”.
Another report claimed that the anti-Maduro caretaker government had been implicated in the alleged mismanagement of $152 million in funds before its recent disbandment, with presenter Emma concluding that “Venezuelans don’t feel there is opposition to the government”.
But both stories were fake and the two newsreaders don’t exist. These are avatars, based on real actors, that were generated using technology from Synthesia, a London-based artificial intelligence company. Their American accents were synthesized, their talking faces generated by machine learning algorithms.
Last week, YouTube suspended five accounts, including House of News, for sharing government-aligned misinformation. But the emergence of deepfakes and AI-generated media represents a new frontier in Venezuela’s propaganda and disinformation campaign, raising concerns about the potential influence on a population that has little access to reliable information due to widespread censorship both online and offline.
“In Venezuela, there is an information desert where disinformation can thrive,” said Adrián González, director of Cazadores de Fake News, a Caracas-based disinformation monitor. “And now the technology is here to make compelling fake news videos.”
González said the network of outlets spreading propaganda in Venezuela was extensive, ranging from official media to independent but allied outlets to fake news providers. On social media, posters have used automation tools to boost government talking points, helping messages reach more people.
Over the past year, generative AI technologies — software that can create images, videos, and text based on user prompts and descriptions — have grown in popularity. Products such as Dall-E and ChatGPT are widely adopted by users ranging from school children to elite computer programmers.
But there have also been concerns about the software’s potential to generate misinformation. Documented examples of interactions with generative AI agents such as chatbots show how they spit out false information, called “hallucinations”, display biases and spawn conspiracy theories.
Synthesia’s technology, based on a type of AI technique known as deep learning, generates videos featuring avatars. These avatars speak from a user-written script, in a variety of languages, and videos can be created in 10 minutes. The company says it produces around 10,000 videos a month, and its clients range from the advertising company WPP to the UK’s NHS, which uses the service to create health information videos in different languages. The start-up has raised $66 million from Silicon Valley investors including Kleiner Perkins and GV, formerly Google Ventures.
Synthesia said the Venezuelan customer was banned from using its service as soon as the video was discovered on Twitter by one of the company’s employees. “We have strict guidelines for the type of content we allow to be created on our platform. We enforce our terms of service and ban users who violate them,” the company said.

Synthesia added that it has put new restrictions in place on the use of its technology, including a ban on all news-style content, a digital watermark marking videos as AI-generated, and a stricter review process for each video.
Synthesia has faced other instances of misuse. In January, political disinformation videos generated with its software circulated in Mali, and last month the US-based research firm Graphika uncovered a pro-China influence operation promoting videos produced by Synthesia.
Under its ethical guidelines, Synthesia said it would only release its product to “trusted customers” after an “explicit internal screening process”. When asked why those policies failed in the Venezuelan case, the company said it had tightened its procedures so that its small content moderation team could see whether a user’s requests had previously been rejected, to help flag potential abuse and repeat offenders.
Reliable information is a scarce resource in Venezuela, which has the largest proven oil reserves in the world but is struggling with inflation running at an annual rate of 350% this year, according to researchers at the local firm Ecoanalítica. Staple foods and medicines are often scarce or prohibitively expensive. The strict sanctions imposed by the United States in 2019 have limited the government’s room for maneuver despite the relaxation of exchange controls. More than 7 million Venezuelans have fled the country since 2015.
The economy is showing signs of modest improvement, but Maduro’s government has grown more authoritarian over a decade in power, cracking down on dissent while co-opting or shutting down mainstream news media. The main newspaper El Nacional stopped its print edition in 2018. Last year, state-controlled and private internet service providers blocked access to independent news sites.
Armies of users on Twitter and other platforms help promote Maduro’s agenda. ProBox, a civil society organization that tracks disinformation on social media in Venezuela, has documented instances where the government rewards people who promote regime talking points through a social credit system known as the “map of the fatherland”.
Common topics promoted include economic recovery, improving living conditions and the shortcomings of the fractured opposition. Government accounts share the propaganda, which circulates freely on social media. The House of News videos have received hundreds of thousands of views on YouTube. State broadcasters also aired them.
The content can be very compelling, said María Virginia Marín, head of ProBox.
“When you have a so-called journalist speaking in English in what looks like international media and selling you a reality that you don’t see, it makes you wonder whether that reality exists and you’re just outside of it,” she said.
The videos were partly aimed at international audiences, Marín added. “The goal is to blur the international debate on Venezuela and cover up the reality of what is happening here.”