Your players already have the hardware.

Ship Your Game
With AI

A new medium for game design — powered by the GPU in every player's machine.

Epic MegaGrants Recipient
// signal
~8¢ per copy sold
0 servers to run
100% on player hardware

Built by veterans of
game dev & applied AI.

HOW_IT_WORKS

How It Works

Your game talks to a plugin. The plugin talks to a runtime. The runtime uses the player's GPU.

Data Flow
● all on-device
Your Game
Unreal / Unity
Your engine with the Tryll plugin installed
Integration
Tryll Plugin
Simple API — prompts, agents, tools, actions
Player Device
Tryll Runtime
Runs on player GPU — NVIDIA, AMD, Intel auto-detected
Latency
<50ms
Cloud Cost
$0
Data Sent
None
Offline
Yes
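In code terms, the layering above might look like this minimal sketch. Every class and method name here is illustrative, not the actual Tryll API:

```python
# Illustrative sketch of the on-device flow: game -> plugin -> runtime.
# All names are hypothetical stand-ins, not the real Tryll SDK surface.

class LocalRuntime:
    """Stands in for the runtime doing inference on the player's GPU."""

    def infer(self, prompt: str) -> str:
        # Real runtime: local LLM inference; here, a canned reply.
        return f"[local model reply to: {prompt}]"


class TryllPlugin:
    """Stands in for the engine plugin: the only API the game sees."""

    def __init__(self, runtime: LocalRuntime):
        self.runtime = runtime

    def ask(self, prompt: str) -> str:
        # No network call: everything stays on the player's device.
        return self.runtime.infer(prompt)


plugin = TryllPlugin(LocalRuntime())
print(plugin.ask("Where is the blacksmith?"))
```

The point of the layering is that the game only ever touches the plugin; hardware detection and model choice live below that line.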
SDK_ROADMAP

Integration Roadmap

The Tryll SDK ships first for Unreal Engine. Request early access before the public waitlist opens.


tryll-sdk — release-schedule
LIVE
$ tryll sdk status --all-engines
Unreal Engine Active
LLM  ·  STT  ·  TTS  ·  VLM  ·  Agents
→ Alpha access available — request below
Unity Next
LLM  ·  STT  ·  TTS  ·  VLM  ·  Agents
→ Full capability parity with Unreal
C++ / Native Planned
LLM  ·  STT  ·  TTS  ·  VLM  ·  Agents
→ Engine-agnostic, for custom builds
$
  1 API surface  ·  0 cloud dependencies  ·  runs on player GPU
Request Early Access
THE_PLATFORM

Why Studios Don't Have AI Yet

We talked to hundreds of studios. They all want AI in their games. Three things stop them.

Cloud Is Too Expensive

Cloud AI charges per token. At scale, a single game can rack up millions in API costs per month. Success becomes unsustainable.

With Tryll, tokens are free: inference runs on player GPUs.

Data Processing Is Hell

GDPR, the AI Act, export laws — sending player conversations to third-party servers creates a legal and compliance nightmare most studios won't touch.

With Tryll, personal data never leaves the device.

No One Wants Third-Party Dependency

If the API goes down, your game breaks. If pricing changes, your margins vanish. Studios won't bet their product on someone else's uptime.

With Tryll, everything works fully offline.

What You Can Build

Platform capabilities that ship with every Tryll-powered game

Knowledge Systems

Connect your game's documentation and lore to an AI that answers player questions in real time. Reduces support load and eliminates alt-tabbing.

Persistent Memory & Relationships

NPCs that remember conversations, form opinions, and evolve over time. Build deep companion systems without thousands of dialogue branches.

Agent Actions

Let AI trigger game mechanics — spawn enemies, modify environments, distribute rewards. Ship dynamic gameplay through API calls, not hardcoded logic.
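The pattern is simple: the model emits a structured call, and the game validates and executes it. A rough sketch, with all names hypothetical rather than the actual Tryll agent API:

```python
# Toy action registry: AI output is dispatched to registered game mechanics.
# Names and shapes are illustrative, not the real Tryll tool-calling API.

actions = {}

def action(name):
    """Decorator that registers a game mechanic under a tool name."""
    def register(fn):
        actions[name] = fn
        return fn
    return register

@action("spawn_enemy")
def spawn_enemy(kind: str, count: int) -> str:
    return f"spawned {count} x {kind}"

@action("give_reward")
def give_reward(item: str) -> str:
    return f"player received {item}"

def dispatch(call: dict) -> str:
    # The model proposes a call; the game looks it up and runs it.
    # Unknown action names raise KeyError, so the AI can't invoke
    # anything the game didn't explicitly expose.
    return actions[call["name"]](**call["args"])

print(dispatch({"name": "spawn_enemy", "args": {"kind": "goblin", "count": 3}}))
# -> spawned 3 x goblin
```

Keeping the registry explicit means the AI can only ever trigger mechanics the developer opted in.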

Runtime Content Generation

Generate quests, lore, and world events based on player context. Infinite content without static databases.

PRICING

Pricing That Scales With Your Game

Start free. Pay when your game succeeds.

Tryll Engine

On-device AI for your game

~8¢ per copy sold

at $15 game price — scales with price

Book a Demo
  • Unreal & Unity plugins
  • Game state awareness & context-aware AI
  • Custom AI characters & companions
  • Voice integration (TTS/STT)
  • AI agents that act in your game world
  • Dynamic content generation
// calculator no upfront cost
Game price
$15
Copies sold
100,000
$0.08
per copy sold
revenue $1,500,000
tryll $8,000
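The calculator's arithmetic can be checked directly. This sketch assumes the fee is a fixed fraction of the sale price (8 cents at a $15 price); that scaling rule is an assumption of the sketch, not a published rate card:

```python
# Assumed: Tryll's fee is a constant fraction of the game price,
# calibrated so a $15 game pays ~8 cents per copy.
RATE = 0.08 / 15  # ~0.53% of the sale price

def tryll_fee(game_price: float, copies_sold: int) -> tuple[float, float]:
    """Return (per-copy fee, total fee) under the assumed linear scaling."""
    per_copy = game_price * RATE
    return per_copy, per_copy * copies_sold

per_copy, total = tryll_fee(15.0, 100_000)
print(f"${per_copy:.2f}/copy, ${total:,.0f} total on ${15.0 * 100_000:,.0f} revenue")
```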
// try_the_tech

Want to see local AI in your game before integrating?

Download Tryll Assistant — a free AI overlay that works with any game.
See how LLM inference works on player hardware, no integration needed.

Download Free on Steam
FAQ

Frequently Asked Questions

Everything you need to know about the Tryll platform

How much VRAM do players need?

8GB of VRAM is enough to run both a model and a game. More VRAM is better, though — graphics-intensive games at high settings may consume most of that memory, which can slow the model down. For the best experience we recommend 12GB+ of VRAM. Tryll automatically selects the most efficient model and quantization for the player's system to keep gameplay smooth.

How many players can actually run local AI today?

According to the Steam Hardware Survey (see the VRAM section), about two thirds of players already have 8GB+ VRAM cards — enough to run local AI:

  • ~1/3 of players have 8GB VRAM
  • ~1/3 of players have 12GB+ VRAM

As open-source models get smaller and faster and gamers keep upgrading their hardware, this share keeps growing — unlocking scalable deployment without cloud reliance.

Why run AI locally instead of in the cloud?

Local AI advantages:

  • Zero cost per player — no API or infrastructure bills
  • Offline capable — your game keeps working without an internet connection
  • Private by design — no personal data leaves the device

Cloud-based AI adds legal and UX risks:

  • Immersion-breaking outages during play sessions
  • Legal compliance challenges (GDPR, AI Act, export laws)

Tryll's local-first approach is not just cheaper — it's safer, faster, and scalable to millions of players.

How do you handle so many different hardware configurations?

This is the core problem Tryll solves — much as DirectX abstracts GPU differences for graphics. Tryll's Model Manager automatically:

  • Detects available GPU, VRAM, and RAM on first launch
  • Selects the optimal model size and quantization (4B-14B)
  • Auto-updates models as better open-source options become available

You write one integration. The platform handles every hardware configuration. No setup required from players or developers.
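A toy version of hardware-aware selection. The VRAM thresholds and model sizes here are illustrative guesses, not Tryll's actual selection policy:

```python
# Hypothetical VRAM -> model mapping, for illustration only.
# Real selection would also weigh RAM, GPU vendor, and game settings.

def pick_model(vram_gb: float) -> str:
    if vram_gb >= 16:
        return "14B-q4"            # larger model, 4-bit quantized
    if vram_gb >= 12:
        return "8B-q4"
    if vram_gb >= 8:
        return "4B-q4"
    return "4B-q8-cpu-offload"     # fallback when VRAM is scarce

print(pick_model(12))  # -> 8B-q4
```

The developer-facing contract stays the same regardless of which model is picked; only the tier behind the API changes.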

What happens if the AI fails during play?

Tryll is designed so that AI is always additive to your game. AI inference runs separately from core gameplay, so if anything goes wrong, the game falls back to default behavior. We're building toward full process isolation, but even today the architecture ensures AI issues don't take down your game.
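The fallback idea fits in a few lines; `run_inference` below is a stand-in that deliberately fails, not a real Tryll call:

```python
# Sketch of "AI is always additive": any inference failure falls back
# to scripted behavior, so core gameplay never depends on the AI path.

def run_inference(prompt: str) -> str:
    # Stand-in for the local runtime; simulates an unavailable model.
    raise RuntimeError("model unavailable")

def npc_reply(prompt: str, scripted_fallback: str) -> str:
    try:
        return run_inference(prompt)
    except Exception:
        # Degrade gracefully to authored content instead of breaking play.
        return scripted_fallback

print(npc_reply("Greet the player", "Well met, traveler."))
# -> Well met, traveler.
```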

Do you support custom engines?

We're building Unreal and Unity plugins first, with an SDK API to follow. The platform is engine-agnostic at its core — the engine-specific plugins are convenience layers on top of the same underlying runtime. If you're working with a custom engine, reach out and we'll work with you on integration.

Still have questions?

Contact us on Discord
GET_STARTED

Start Building

Talk to our team about integrating Tryll into your game.

Contact us at:

team@tryllengine.com