Industry: MEDIA
GenAI Platform for automated media digests
Context:
A European media organization wanted to enhance its editorial workflows and enable teams to transform large volumes of incoming information into curated content more efficiently.
Editors were reviewing thousands of news items each day to prepare weekly digests and targeted newsletters for different audience segments. The process required significant manual effort and made it difficult to scale editorial output as information volume continued to grow.
Solution:
We developed a GenAI-powered content intelligence platform that helps content operations teams transform large streams of incoming information into structured, publication-ready outputs.
The system supports the content workflow by:
- analyzing large volumes of incoming news signals
- identifying key topics, themes, and insights
- generating structured content drafts tailored to different formats and audience segments
- validating outputs through automated quality and consistency checks
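The workflow steps above can be sketched as a simple pipeline. This is a minimal illustration only: the function names and data shapes are hypothetical, and the LLM-backed steps are stubbed with plain Python.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    topic: str
    audience: str
    body: str
    validated: bool = False

def identify_topics(items: list[str]) -> list[str]:
    # Stand-in for LLM-based topic extraction from incoming signals.
    return sorted({item.split(":")[0] for item in items})

def generate_draft(topic: str, audience: str) -> Draft:
    # Stand-in for LLM-based draft generation per format/segment.
    return Draft(topic=topic, audience=audience,
                 body=f"Digest on {topic} for {audience} readers.")

def validate(draft: Draft) -> Draft:
    # Automated quality/consistency check, simplified to a non-empty check.
    draft.validated = len(draft.body) > 0
    return draft

def run_pipeline(items: list[str], audiences: list[str]) -> list[Draft]:
    drafts = [generate_draft(t, a)
              for t in identify_topics(items) for a in audiences]
    return [validate(d) for d in drafts]

drafts = run_pipeline(["tech: new chip launched", "sport: final results"],
                      ["b2b", "consumer"])
```

In the real platform each stage is a separate service; the sketch only shows how the stages compose.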
Content teams remain fully in control of the final output, with the platform designed to augment human decision-making rather than replace it.
Architecture & Technology:
The platform was implemented as a modular AI system running on Google Cloud Platform.
Core components include:
- microservices architecture deployed on GKE (Kubernetes)
- backend services built with Python and FastAPI
- asynchronous communication via Google Pub/Sub
- infrastructure managed using Terraform
The AI layer uses a multi-model strategy, dynamically routing requests between different LLM providers, including Gemini and OpenAI, with automatic fallbacks to ensure reliability and performance.
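A fallback router of this kind can be sketched as follows. The provider clients are stubs (the real system would wrap the Gemini and OpenAI APIs); the routing table and error type are illustrative assumptions, not the production code.

```python
class ProviderError(Exception):
    """Raised when a provider call fails (stubbed here)."""

def call_gemini(prompt: str) -> str:
    # Stub simulating a provider outage to exercise the fallback path.
    raise ProviderError("simulated outage")

def call_openai(prompt: str) -> str:
    # Stub standing in for a real OpenAI API call.
    return f"openai: {prompt[:20]}"

# Preferred provider order per task type (hypothetical routing table).
ROUTES = {
    "summarize": [call_gemini, call_openai],
    "validate": [call_openai, call_gemini],
}

def route(task: str, prompt: str) -> str:
    """Try providers in priority order, falling back on failure."""
    last_error = None
    for provider in ROUTES[task]:
        try:
            return provider(prompt)
        except ProviderError as exc:
            last_error = exc  # record the failure and try the next provider
    raise RuntimeError(f"all providers failed for {task!r}") from last_error

result = route("summarize", "Weekly digest of EU media news")
```

Because the routing table is data, provider priority can be tuned per task for cost or latency without touching the calling code.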
To provide contextual grounding, the system implements Retrieval-Augmented Generation (RAG) using PostgreSQL with the pgvector extension. This enables semantic retrieval of relevant content and improves output consistency.
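The retrieval step can be illustrated in miniature. With pgvector, nearest-neighbor search is typically a single `ORDER BY embedding <=> %(query)s` query; the pure-Python version below shows the same cosine-distance ranking in memory, with toy 3-d embeddings that are purely illustrative.

```python
import math

# In production, retrieval is a pgvector query along the lines of:
#   SELECT id, body FROM documents ORDER BY embedding <=> %(query)s LIMIT 5;
# Below, the same ranking is done in memory on toy vectors.

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

DOCS = {  # document id -> (embedding, text); values are illustrative
    "d1": ([1.0, 0.0, 0.0], "Chip maker announces new fab"),
    "d2": ([0.0, 1.0, 0.0], "Football league final results"),
    "d3": ([0.9, 0.1, 0.0], "Semiconductor supply chain update"),
}

def retrieve(query_embedding: list[float], k: int = 2) -> list[str]:
    """Return the ids of the k documents closest to the query embedding."""
    ranked = sorted(DOCS.items(),
                    key=lambda item: cosine_distance(query_embedding, item[1][0]))
    return [doc_id for doc_id, _ in ranked[:k]]

top = retrieve([1.0, 0.05, 0.0])  # a query near the "semiconductor" topic
```

The retrieved documents are then injected into the generation prompt, grounding the model's output in the organization's own content.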
Content generation is orchestrated through multi-agent workflows, where specialized agents handle tasks such as research, summarization, writing assistance, and validation.
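One way to picture the orchestration is as agents passing a shared state through a sequence. The agent functions below are stubs (each would wrap an LLM call in the real system), and the fixed linear workflow is a simplification: real workflows may branch or loop on validation results.

```python
from typing import Callable

def research_agent(state: dict) -> dict:
    state["facts"] = ["fact A", "fact B"]  # stubbed research output
    return state

def summarize_agent(state: dict) -> dict:
    state["summary"] = "; ".join(state["facts"])
    return state

def writer_agent(state: dict) -> dict:
    state["draft"] = f"Digest: {state['summary']}"
    return state

def validator_agent(state: dict) -> dict:
    # Simplified check; the real validator enforces quality and style rules.
    state["approved"] = state["draft"].startswith("Digest:")
    return state

# The orchestrator runs specialized agents over a shared state dict.
WORKFLOW: list[Callable[[dict], dict]] = [
    research_agent, summarize_agent, writer_agent, validator_agent,
]

def run(topic: str) -> dict:
    state: dict = {"topic": topic}
    for agent in WORKFLOW:
        state = agent(state)
    return state

result = run("EU media trends")
```

Keeping each agent's responsibility narrow makes individual steps easy to test, swap, or re-run independently.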
The entire pipeline is monitored using Langfuse, providing full observability across prompts, model usage, latency, and output quality.
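The kind of data such tracing captures can be shown with a small hand-rolled decorator. To be clear, this is not the Langfuse SDK; it only illustrates the per-call metadata (prompt, model, latency, output size) that an observability layer records.

```python
import functools
import time

TRACES: list[dict] = []  # in-memory stand-in for an observability backend

def observe(model: str):
    """Record prompt, model, latency, and output size for each call.

    A hand-rolled illustration of trace data; not the Langfuse SDK.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt: str) -> str:
            start = time.perf_counter()
            output = fn(prompt)
            TRACES.append({
                "name": fn.__name__,
                "model": model,
                "prompt": prompt,
                "latency_s": time.perf_counter() - start,
                "output_chars": len(output),
            })
            return output
        return wrapper
    return decorator

@observe(model="stub-llm")
def summarize(prompt: str) -> str:
    return prompt.upper()  # stand-in for a model call

summarize("weekly digest")
```

Collecting these fields per call is what makes it possible to compare providers on quality, latency, and cost from production traffic.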
Deliverables:
- Enabled content teams to process large volumes of incoming signals in a single automated pipeline, eliminating the need for manual pre-selection of sources.
- Reduced the effort required to prepare segment-specific content by automatically generating structured drafts ready for human review.
- Improved consistency and reliability of outputs through automated validation and style-alignment checks.
- Introduced a flexible multi-model AI architecture, allowing continuous optimization of quality, response time, and operational cost without changing the production workflow.
