Infrastructure & Identity

Identity and Account Separation

Dedicated accounts for the assistant. No shared credentials or identity with the human operator.

Purpose of This Document

This document defines how identity is modeled, isolated, and governed within the architecture. Identity separation is treated as a foundational control rather than an implementation detail. The reasoning is straightforward: shared identity implies shared failure. When an assistant operates under a human’s credentials, any mistake, compromise, or misconfiguration occurs as that human, with that human’s access, and is often indistinguishable from the human’s own actions after the fact. This architecture eliminates that class of risk by ensuring that no identity is ever shared between the operator and the assistant.

Identity as a Security Boundary

In many AI deployments, assistants gain access by inheriting a human’s credentials, session, or environment. The appeal is obvious — it requires no setup — but the consequences are severe. Accountability collapses because logs cannot distinguish between human-initiated and assistant-initiated actions. Silent privilege escalation becomes possible because the assistant can reach anything the human can reach. And revocation becomes destructive, since disabling the assistant’s access means disabling the human’s.

In this architecture, identity itself is a hard boundary. The assistant has its own accounts, uses its own credentials, and maintains its own sessions. It is never authenticated as the human. Conversely, the human never delegates personal credentials, never shares active sessions, and remains the sole owner of their digital identity. The two identities may collaborate — they do not overlap.

Assistant-Owned Accounts

The assistant operates using accounts created explicitly for its use. These accounts exist solely to support collaboration and task execution. They are not used by any human, carry no personal data, can be revoked without collateral damage to the operator, and are intentionally narrow in scope.

In practice, this means the assistant holds a dedicated email account, a dedicated calendar, a dedicated GitHub account, and dedicated API credentials for external services. Together, these form the assistant’s professional identity — a self-contained set of accounts that can be audited, restricted, or decommissioned independently of the operator’s own infrastructure. The assistant’s identity is not an extension of the human’s; it is a parallel identity with its own lifecycle.
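
To make this parallel identity auditable, the set of assistant-owned accounts can be kept as an explicit, machine-readable inventory. The sketch below is illustrative only; every provider and account name in it is hypothetical, and the real inventory would live wherever the operator keeps infrastructure configuration.

```python
# Hypothetical inventory of assistant-owned identities. Every account here is
# created for the assistant alone; none is shared with the human operator.
ASSISTANT_IDENTITY = {
    "email":    {"provider": "example-mail",  "account": "assistant@example.com"},
    "calendar": {"provider": "example-cal",   "account": "assistant@example.com"},
    "github":   {"provider": "github",        "account": "assistant-bot-example"},
    "api_keys": {"provider": "example-vault", "account": "assistant/api"},
}

def decommission_plan(identity):
    """List every account to revoke, independent of any human-owned account.

    Because the inventory contains only assistant-owned identities, walking it
    can never touch the operator's own accounts.
    """
    return [(name, entry["account"]) for name, entry in identity.items()]
```

Keeping the inventory explicit is what makes the later lifecycle guarantees enforceable: revocation is a walk over this structure, not a hunt through the operator's accounts.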

Authentication

Strong authentication is required for all assistant-owned accounts. Weak or reusable secrets are avoided as a matter of policy.

Where supported, passkeys are used instead of passwords. Passkeys provide resistance to phishing, eliminate password reuse, and reduce credential leakage risk. The goal is not convenience but the removal of entire attack classes: a credential that cannot be typed cannot be exfiltrated through a prompt injection, and a credential that is bound to a specific device cannot be replayed from a compromised server.

When passkeys are not available, multi-factor authentication is mandatory. Time-based one-time passwords are preferred. Backup codes are stored offline and are never accessible to the assistant itself, ensuring that recovery from a locked-out state requires the operator’s direct involvement.
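
The TOTP fallback follows RFC 6238. For clarity, a minimal standard-library implementation of code generation (SHA-1, 30-second step, six digits) looks like this; it is a reference sketch, not the verifier any particular provider runs:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA-1, 30 s time step)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of whole time steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: the low nibble of the last byte selects a
    # 4-byte window, masked to 31 bits, reduced to the requested digit count.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because codes are derived from the current time step, a captured code expires within seconds; this is what makes TOTP an acceptable fallback when device-bound passkeys are unavailable.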

Session Isolation and Revocation

Sessions created by assistant-owned accounts are treated as disposable. They can be invalidated at any time without coordination. Restoring the assistant from a snapshot does not resurrect previous sessions — re-authentication is expected after any recovery. This ensures that persistence of state does not imply persistence of authority. A restored assistant must prove its identity again, just as a returning employee must badge in again after an absence.
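
The rule that state persistence never implies authority persistence can be expressed as a small session store. This is a toy sketch with hypothetical names; real deployments would revoke sessions at the identity provider, but the invariant is the same: a restore starts from zero sessions.

```python
import secrets

class SessionStore:
    """Disposable sessions for assistant-owned accounts.

    Sessions can be revoked at any time without coordination, and are never
    carried across a snapshot restore.
    """

    def __init__(self):
        self._active = {}

    def create(self, account):
        token = secrets.token_urlsafe(16)
        self._active[token] = account
        return token

    def is_valid(self, token):
        return token in self._active

    def revoke_all(self):
        self._active.clear()

    def restore_from_snapshot(self):
        # Persistence of state does not imply persistence of authority:
        # a restored assistant holds no sessions and must re-authenticate.
        self._active = {}
```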

No Credential Forwarding

The assistant is never permitted to store human credentials, proxy authentication on behalf of the human, or act using a human’s session or token. This rule is absolute within the current architecture.

When an action requires the human’s authority — signing a document, approving a financial transaction, merging code into a protected branch — the assistant’s role is to prepare the work, present it for review, and wait for the human to execute. The assistant may do everything up to the point of commitment. The commitment itself belongs to the human. This division is not a limitation on the assistant’s usefulness; it is the mechanism by which the architecture preserves meaningful human authority over consequential actions.
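
The prepare/commit split can be made mechanical: the assistant may construct a pending action, but the commit path rejects the assistant as an actor. The class and field names below are hypothetical, chosen to illustrate the gate rather than describe any particular system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PendingAction:
    """Work the assistant has prepared but is not permitted to commit."""
    description: str
    approved_by: Optional[str] = None

class ApprovalGate:
    """The assistant prepares everything up to the point of commitment;
    the commitment itself belongs to the human."""

    def prepare(self, description):
        return PendingAction(description)

    def commit(self, action, actor):
        if actor == "assistant":
            # The assistant can never act on the human's authority.
            raise PermissionError("assistant may not commit consequential actions")
        action.approved_by = actor
        return f"committed by {actor}: {action.description}"
```

The point of encoding the rule rather than relying on convention is that the refusal happens even if the assistant is confused or compromised: the gate, not the assistant's judgment, enforces the boundary.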

Blast Radius

Each assistant-owned identity is scoped to minimize the damage that results from its compromise. No single account provides lateral access to unrelated systems. Compromise of the assistant’s email does not imply access to its GitHub account, and compromise of its GitHub account does not expose API credentials for external services. Financial exposure is capped where applicable through the budgeting mechanisms described in later sections.

This scoping ensures that failures remain local and recoverable. An identity breach is a contained incident, not a cascading compromise.
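
One way to picture this scoping is a secret store with per-service namespaces: a lookup is confined to one namespace, so leaking one service's secrets exposes nothing about another's. This is a toy model of the containment property, not a real vault implementation.

```python
class ScopedVault:
    """Per-service secret namespaces with no cross-service access path."""

    def __init__(self):
        self._spaces = {}

    def put(self, service, name, value):
        self._spaces.setdefault(service, {})[name] = value

    def get(self, service, name):
        # Lookup is confined to the named service's namespace. A caller that
        # holds the "email" namespace cannot enumerate or read "github" keys.
        return self._spaces[service][name]
```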

Lifecycle and Revocation

Assistant-owned accounts are subject to explicit lifecycle management. They can be suspended or deleted without affecting the operator’s own accounts. Inactivity-based deletion is preferred where the service provider supports it, so that accounts do not persist indefinitely if the assistant is decommissioned or abandoned. Revocation procedures are documented and rehearsed — the operator should be able to disable the assistant’s entire identity within minutes, not hours, if the need arises.
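
Where a provider does not offer inactivity-based deletion natively, the same policy can be approximated by a periodic sweep over the account inventory. The threshold and account names below are illustrative assumptions, not prescribed values.

```python
import datetime

def accounts_to_expire(last_used, now, max_idle_days=90):
    """Flag assistant accounts idle past the threshold for suspension.

    `last_used` maps account names to their most recent activity timestamp.
    Decommissioned identities are caught by this sweep rather than lingering
    as orphaned accounts with stale credentials.
    """
    cutoff = now - datetime.timedelta(days=max_idle_days)
    return sorted(name for name, ts in last_used.items() if ts < cutoff)
```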

If the assistant ceases to operate, its identities are designed to expire naturally rather than linger as orphaned accounts with stale but potentially exploitable credentials.

Threat Model Implications

This identity model mitigates credential leakage via prompt injection (because the assistant holds no human credentials to leak), silent privilege escalation (because the assistant’s permissions are its own, not the operator’s), cross-account compromise (because identities are scoped and isolated), and irreversible identity damage (because the assistant’s accounts can be destroyed without affecting the human’s).

It does not attempt to protect against compromise of the identity provider itself or physical coercion of the human operator. These risks are acknowledged and accepted within the overall threat model.

Summary

By enforcing strict separation between human and assistant identities, the architecture ensures clear accountability, limited blast radius, predictable recovery, and the absence of silent authority transfer. Identity is a first-class security boundary — the foundation on which collaboration, delegation, and revocation all depend.


This document defines who the assistant is allowed to be. Subsequent sections describe how this identity participates in collaboration and governance without inheriting human authority.