How Audiences and Technologies Are Reshaping Entertainment

Tencent’s Hunyuan AI

The power to influence the future of the entertainment industry and its business models is shifting. Last year’s rapid advances in technologies such as generative AI, and their swift adoption, signal that the industry is poised to enter a cycle of recurring, accelerated disruption. The trajectory of these changes to how and where entertainment is created, distributed and consumed is currently being led by audiences and the companies behind innovative technologies.

The Dance Between Platforms, Creators and Audiences

Media-sharing platforms are no longer just learning from user behavior to create a game-like loop that keeps audiences engaged and drives subscription value or advertiser benefits. Most are now actively training AI models on audience and creator data, which can inform their own content and product decisions. The same can be said for many of the generative AI tools that creators are using. As a result, the balance of power in storytelling is increasingly shifting toward the platforms themselves. While some creators still earn significant revenue, and audiences enjoy free or low-cost quality entertainment, the platforms are reaping greater rewards from the data and insights they collect.

In addition, platforms ranging from social media applications to app stores are providing creators with varying levels of valuable data about their own content and audiences. Some offer deeper insights, which creates an incentive for creators to publish content with them and gives the platforms more content to monetize and learn from.

OpenAI’s Sora

Platform audiences are also being incentivized to share more content. Any audience member contributing content to a platform can be considered a creator of sorts, and their content is getting support to become more engaging. Tools like Meta AI and Snap AI, along with YouTube’s rollout of Veo integration into its platform, will make it easier for audience members to have fun creating compelling content to share. Off-platform tools, from OpenAI’s Sora to Canva, are also making it easier for more audience members to become creators.

Basic use of many of these media-sharing platforms, and even some games, is helping to grow the amount of content that AI can be trained on. Most popular social platforms automatically opt users into training on their data, requiring a manual opt-out. Niantic’s games like Pokémon Go use scans from players to train an AI model that maps physical locations around the world. LinkedIn’s gamified Top Voices badge requires users to keep contributing valuable insights on how they approach their areas of focus, for free, in order to retain it. Many of these platforms are also associated with generative AI tools of varying abilities, from Google DeepMind’s Veo 2 to Tencent’s Hunyuan AI.

Could these platforms eventually become the biggest generators of synthetic content themselves, in addition to controlling the algorithms that decide what narratives and experiences are easily discovered by audiences? Now is the time for industry stakeholders to respond with guidelines and laws that fairly protect the interests and livelihoods of all involved in our ecosystem – audiences and businesses included. 

Characters as a New Medium

MasterClass On Call

2019 was the year that virtual beings started to arrive in entertainment, and 2025 is the year they may start to break through as a medium in and of themselves, thanks to technical advances and increased consumer interest in interacting with content and generative AI bots. Just as content can be accessed on demand, so too can characters. While characters are core to a creator-defined story world, audiences can now invite AI-based characters into their own worlds. For instance, ChatGPT users are already having voice conversations with personalities they bring to life through basic prompts. X is expanding the integration of its AI assistant, Grok, into multiple features of its user experience. MasterClass On Call lets audiences interact with bots trained to behave like select experts on its platform.
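As an illustration of how little it takes, the sketch below shows one way a conversational character can be brought to life with a single basic prompt, using the OpenAI Python SDK. The model name, persona text and helper function are illustrative assumptions, not a description of how ChatGPT or any platform mentioned here implements its characters.

```python
# A minimal sketch (assumptions: the OpenAI Python SDK, an illustrative model
# name and persona) of defining a "character" with nothing more than a prompt.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The entire character is a system prompt describing a persona.
persona = (
    "You are Captain Maro, a retired deep-sea explorer who tells vivid, "
    "good-humored stories and always stays in character."
)

history = [{"role": "system", "content": persona}]

def chat(user_message: str) -> str:
    """Send one turn to the character and keep the running conversation."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What's the strangest thing you ever saw underwater?"))
```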

Mark Zuckerberg on a video call with the digital double of Don Allen Stevenson III at Meta Connect 2024

Synthetic interactions, with everyone from fictional characters to celebrities, can go beyond voice. Mark Zuckerberg recently demonstrated Meta’s ability to create digital doubles, with user consent, by holding a live video call with a digital double of creative technologist and futurist Don Allen Stevenson III while Stevenson looked on. Meta is also making it easier for creators to build generative-AI-powered non-player characters for audiences to interact with in fully immersive social worlds.

Apps like Replika let audiences create their own ‘virtual AI friend’ that can even come to life through augmented reality. Soon, computer vision integrations through device cameras will help the AI behind characters better understand the context of each audience member’s world, adding another layer of immersion. And the more audiences interact with these characters, the more the platforms will learn about them, enabling them to create more compelling offerings and perhaps even to leverage these insights for platform advertisers.

With the available tools, both creators and audiences can craft their own interactive characters, but this also means that deepfakes and eerily similar likenesses are easier than ever to generate without the consent of IP holders, on-camera talent or even audience members.

Custom Entertainment

Photograph of the Tulpamancer setup at SXSW 2024

Platforms with engaging content and successful algorithms build audience loyalty as audiences come to trust their curation. Competition around successful curation is not going away. But customization, made possible by generative AI, will take personalized entertainment to an entirely new level.

Long-form custom-generated content may still be cost-prohibitive for many creators and platforms, but it is already possible. The VR experience Tulpamancer, for example, offers a glimpse into the very near future of content. A narrated, fully immersive story about each individual audience member’s past memories and possible future is generated in real time, based on answers they give to an AI-based character before putting on the VR headset. As platforms deepen their knowledge of each individual audience member, content could be customized without requiring any prior user input.

Short-form customized content is less cost-prohibitive and is just starting to emerge in simple forms. For example, TIME AI’s Chat-Enabled Articles feature allows users to ask questions through chat to learn more about whatever interests them in an article. This also keeps them interacting with TIME content longer.

As consumer brands invest more in storytelling to drive engagement, especially on digital platforms, their budgets and access to deep audience insights will likely help to accelerate the innovation behind story customization. “The obvious starting place for brands is in personalization. AI will allow a level of hyper-personalization that has never been possible...and every brand is already racing toward that,” shares Eric Shamlin, CEO of Secret Level. “But what I find more interesting is the prospect of co-creation. Brands can now train a model on their brand and then open it to their audience. Audiences will then be able to co-create their own ads, content and experiences with their favorite brand. From music and social media, to gifts and new product ideas, fans will be able to co-author all new types of collaborations thanks to the guardrails and flexibility that a well-trained AI agent will allow." 

Personalized experiences can take place before audiences even choose content, shares Sharon Flynn, Principal of Data Strategy at Publicis Sapient. Consider an audience member arriving in any “environment that offers you choice,” whether digital or in person. “Disney played with this…when you arrive at a character encounter, they were able to actually act as if they knew the child.” The level of personalization that becomes possible once a platform has context about an audience member beyond trends in their content interests is virtually limitless.

Easier Entry to All Mediums

Google DeepMind’s Genie 2

From podcasts to interactive virtual worlds, platform feature advances and access to audience insights are making it easier for audience members to take part in content creation and for content creators to extend their stories to new mediums. After all, platforms always want more engaging content, and they offer creators an opportunity to reach new fans and revenue streams.

For example, YouTube has benefited from expanding into podcast content, a medium that was traditionally audio-only. When it comes to podcasting, “video will be non-negotiable in 2025,” shares Fatima Zaidi, CEO of Quill. “Successful podcasts will be those that evolve with their audience and adapt to new formats like video—securing relevance in a fast-moving media landscape. Spotify alone reported more than 250,000 video podcasts on its platform along with 170 million users having watched a video podcast. Embracing video expands reach, opens the doors to visual-first platforms, and supports the creation of promotional content.”

In addition to the aforementioned tools making it easier for creators and audiences to produce video content to share, be on the lookout for tools in various stages of research and development, like Genie 2, which can already generate playable 3D environments, or NotebookLM, which can generate short podcast episodes from nothing more than an uploaded PDF. Check out the podcast episode it created using this article here:

Please note that this is AI-generated using NotebookLM solely based on the PDF of this article, and therefore some points shared in the podcast may not be accurate.

AI as a Story Hunter 

Unsplash photograph by Talia Cohen

AI can help uncover true stories that have so far been technically impossible to find. For example, organizations like the Earth Species Project are building machine-learning models to decode communication between species. New storytellers could be neither human nor machine. Imagine what large language model platforms will be able to learn about humanity as a whole, based on their training and our interactions with them. There is much to be excited about as AI continues to be used to discover and inspire stories that may help our world evolve.


Laura Mingail
Laura Mingail supports companies in anticipating and navigating the future of entertainment, as well as launching content and tools that thrive in an ever-evolving landscape. Through her consultancy, Archetypes & Effects, she helps shape impactful strategies for clients. She also contributes to the entertainment industry's evolution as a media contributor, Advisory Board Member for SXSW, and as a speaker at leading events and institutions, such as Augmented World Expo, Series Mania, SXSW, CES, Tribeca Festival, the University of Toronto, and Mensa Canada.