Voice & live sessions
The headline is voice-first. A live session works like a phone or video call: the user speaks, the AI Fellow responds, and the exchange happens in real time. The Fellow detects when the user starts and stops talking, and the user can interrupt the Fellow mid-sentence the same way they would interrupt a person.
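A minimal sketch of the interruption mechanic on the client side. The `LiveSession` interface and its event and method names are illustrative, not a documented Frontierz API:

```ts
// Hypothetical client-side session interface. Event and method names
// are illustrative, not a documented Frontierz API.
interface LiveSession {
  on(event: "userSpeechStart" | "userSpeechEnd", handler: () => void): void;
  stopPlayback(): void;      // cut the Fellow's audio output immediately
  notifyInterrupted(): void; // tell the backend the turn was cut short
}

// Barge-in: the moment the user starts talking, the Fellow stops
// mid-sentence, the way a person would when interrupted.
function enableBargeIn(session: LiveSession): void {
  session.on("userSpeechStart", () => {
    session.stopPlayback();
    session.notifyInterrupted();
  });
}
```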
A session is also more than voice. Camera, screen, and text sharing layer on top, each one user-controlled, each one off by default unless you've enabled it for that AI Fellow or that Learning Path.
Session length
A single live session is capped at 45 minutes. When the cap is reached, the conversation ends, and the user can start a new one if they want to keep going. Past conversations stay available, so picking up where they left off in a follow-up session is straightforward. See Conversation continuity.
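As a sketch, the cap is simple to picture as a client-side timer; `endSession` here is an illustrative callback, not a documented call:

```ts
// 45-minute cap on a single live session. `endSession` is an
// illustrative callback, not a documented call.
const SESSION_CAP_MS = 45 * 60 * 1000;

function startSessionCapTimer(endSession: () => void): () => void {
  const timer = setTimeout(endSession, SESSION_CAP_MS);
  return () => clearTimeout(timer); // cancel if the user ends early
}
```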
Camera sharing
Off by default. The user decides whether to turn on the camera, and the Fellow only sees a frame when the user explicitly activates it.
When it's on:
- The Fellow gets one frame as the camera turns on, acknowledges briefly, and goes back to listening (sketched after this list).
- It can ask for a fresh frame mid-conversation if it would help. ("Can I see what you mean?")
- It doesn't comment on what it sees unless the user invites it to or it's directly relevant.
- The user can turn the camera off whenever they want. Nothing is captured after that.
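A sketch of the single-frame capture using standard browser media APIs (`getUserMedia`, a canvas). `sendFrameToFellow` is an illustrative placeholder for handing the frame to the session:

```ts
// Single-frame capture on explicit user action, using standard browser
// APIs. `sendFrameToFellow` is an illustrative placeholder.
async function shareOneCameraFrame(
  sendFrameToFellow: (frame: Blob) => Promise<void>
): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.muted = true;
  video.srcObject = stream;
  await video.play(); // dimensions are available once playback starts

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);

  const frame = await new Promise<Blob>((resolve) =>
    canvas.toBlob((b) => resolve(b!), "image/jpeg")
  );
  await sendFrameToFellow(frame);
  // Nothing is recorded: the frame lives only in the active session.
}
```

A mid-conversation refresh would reuse the same capture path; turning the camera off stops the stream's tracks, after which nothing is captured.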
Useful for: practising body language for presentations, holding up a physical document or product, or just adding a human touch to the conversation.
Privacy posture. Frontierz doesn't record or store camera images beyond the active session. The user decides whether the Fellow ever sees them at all, when, and for how long.
Screen sharing
Off by default. The user clicks share, picks a screen, window, or tab in the browser's native dialog, and the Fellow starts seeing what's on screen.
When it's on:
- The Fellow takes adaptive snapshots: more often when the user is talking through the screen, less often when they're reading or thinking (see the sketch after this list).
- The image quality is high enough to read fine UI text: code, slide decks, dashboards, and spreadsheets are all legible.
- It absorbs what's on screen without interrupting, and only chimes in when something is relevant or when the user asks ("What do you see?").
- Sharing stops the moment the user clicks stop, or after the time limit you set.
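A sketch of the capture loop using the standard `getDisplayMedia` API, which opens the browser's native picker. The cadence values and the `isUserTalking` / `sendSnapshot` hooks are illustrative, not documented behaviour:

```ts
// Adaptive snapshot loop over a user-picked screen, window, or tab.
// getDisplayMedia is the standard browser API; the cadence values and
// the isUserTalking / sendSnapshot hooks are illustrative.
async function shareScreen(
  isUserTalking: () => boolean,
  sendSnapshot: (frame: Blob) => void
): Promise<() => void> {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const video = document.createElement("video");
  video.muted = true;
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  let timer = 0;

  const snapshot = () => {
    // Capture at full resolution so fine UI text stays legible.
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext("2d")!.drawImage(video, 0, 0);
    canvas.toBlob((b) => b && sendSnapshot(b), "image/png");
    // Snapshot more often while the user talks through the screen.
    timer = window.setTimeout(snapshot, isUserTalking() ? 2_000 : 10_000);
  };
  snapshot();

  // Stop sharing: clear the loop and release the capture immediately.
  return () => {
    clearTimeout(timer);
    stream.getTracks().forEach((t) => t.stop());
  };
}
```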
Useful for: walking through documents, navigating software, reviewing spreadsheets or dashboards, getting feedback on a workflow as the user performs it.
Privacy posture. The Fellow only sees what the user explicitly shares. Screens aren't recorded or stored beyond the session. Only the most recent frames stay in working memory while the conversation is live.
Text sharing
Some content reads better than it dictates: URLs, code snippets, prompts, templates. The Fellow can send text through a slide-out panel during the conversation, and the user copies it to the clipboard with a single click.
The conversation stays voice-first. Text sharing exists for the things that don't belong in spoken language.
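The one-click copy is the standard browser clipboard API; the button wiring around it is an illustrative sketch:

```ts
// One-click copy from the slide-out panel. navigator.clipboard.writeText
// is a standard browser API; the button wiring is illustrative.
function wireCopyButton(button: HTMLButtonElement, sharedText: string): void {
  button.addEventListener("click", async () => {
    await navigator.clipboard.writeText(sharedText);
    button.textContent = "Copied"; // lightweight confirmation
  });
}
```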
Conversation continuity
Users can pick up where they left off. When they come back, the Fellow remembers the previous conversation's context, so they don't have to re-introduce themselves or restate the goal of last week's session.
What this looks like in the user view:
- A list of past conversations with titles and summaries.
- Read access to the full transcript of any of them.
- An option to start a new session that continues from a specific past one.
Continuity is configurable per AI Fellow. For some Fellows (a generic demo, an external pilot) you don't want any memory at all. For others (an ongoing coaching tool) memory is half the value.
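A sketch of what per-Fellow configuration might look like; the shape and field names are illustrative, not a documented schema:

```ts
// Illustrative per-Fellow configuration shape, not a documented schema.
interface FellowConfig {
  camera: boolean;              // off by default
  screenShare: boolean;         // off by default
  screenShareLimitMin?: number; // optional time limit on sharing
  continuity: "none" | "full";  // conversation memory per Fellow
}

// A generic demo or external pilot keeps no memory at all...
const demoFellow: FellowConfig = {
  camera: false,
  screenShare: false,
  continuity: "none",
};

// ...while an ongoing coaching Fellow treats memory as half the value.
const coachingFellow: FellowConfig = {
  camera: true,
  screenShare: true,
  screenShareLimitMin: 30,
  continuity: "full",
};
```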