Memory¶
Persistent agent memory -- protocol, retrieval pipeline, shared org memory, consolidation, and archival.
Protocol¶
protocol ¶
MemoryBackend protocol -- lifecycle + memory operations.
Application code depends on this protocol for agent memory storage and retrieval. Concrete backends implement this protocol to provide connection management, health monitoring, and memory CRUD operations.
MemoryBackend
¶
Bases: Protocol
Structural interface for agent memory storage backends.
Concrete backends implement this protocol to provide per-agent memory storage, retrieval, and lifecycle management.
All CRUD operations (store, retrieve, get, delete,
count) require a connected backend and raise
MemoryConnectionError if called before connect().
Attributes:

| Name | Type | Description |
|---|---|---|
| is_connected | bool | Whether the backend has an active connection. |
| backend_name | NotBlankStr | Human-readable backend identifier. |
connect async ¶

Establish connection to the memory backend.

Raises:

| Type | Description |
|---|---|
| MemoryConnectionError | If the connection cannot be established. |
disconnect async ¶

Close the connection to the memory backend.
health_check async ¶

Check whether the backend is healthy and responsive.

Returns:

| Type | Description |
|---|---|
| bool | True if the backend is healthy and responsive, False otherwise. |
Note
Implementations should catch connection-level errors,
log them at WARNING level with full exception context,
and return False. The caught exception must never be
silently discarded. Only raise for programming errors
(e.g. backend not initialized).
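To make the note above concrete, here is a minimal sketch of a protocol-compliant health_check. The class name, the _conn attribute, and its ping() method are illustrative inventions, not part of synthorg; only the error-handling contract (log at WARNING with exception context, return False, raise only for programming errors) comes from the documentation.

```python
import logging

logger = logging.getLogger(__name__)


class ExampleMemoryBackend:
    """Hypothetical backend stub -- illustrates the health_check contract only."""

    def __init__(self) -> None:
        self._conn = None  # set by connect() in a real backend

    async def health_check(self) -> bool:
        if self._conn is None:
            # Programming error: backend never initialised -- raise, don't swallow.
            raise RuntimeError("backend not initialized")
        try:
            await self._conn.ping()
        except ConnectionError:
            # Connection-level failure: log with full context, never discard silently.
            logger.warning("health check failed", exc_info=True)
            return False
        return True
```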
Source code in src/synthorg/memory/protocol.py
store async ¶

Store a memory entry for an agent.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| agent_id | NotBlankStr | Owning agent identifier. | required |
| request | MemoryStoreRequest | Memory content and metadata. | required |

Returns:

| Type | Description |
|---|---|
| NotBlankStr | The backend-assigned memory ID. |

Raises:

| Type | Description |
|---|---|
| MemoryConnectionError | If the backend is not connected. |
| MemoryStoreError | If the store operation fails. |

Source code in src/synthorg/memory/protocol.py
retrieve async ¶

Retrieve memories for an agent, ordered by relevance.

When query.text is None, performs metadata-only filtering (no semantic search).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| agent_id | NotBlankStr | Owning agent identifier. | required |
| query | MemoryQuery | Retrieval parameters. | required |

Returns:

| Type | Description |
|---|---|
| tuple[MemoryEntry, ...] | Matching memory entries ordered by relevance. |

Raises:

| Type | Description |
|---|---|
| MemoryConnectionError | If the backend is not connected. |
| MemoryRetrievalError | If the retrieval fails. |

Source code in src/synthorg/memory/protocol.py
get async ¶

Get a specific memory entry by ID.

Returns None when the entry does not exist -- MemoryNotFoundError is never raised by this method.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| agent_id | NotBlankStr | Owning agent identifier. | required |
| memory_id | NotBlankStr | Memory identifier. | required |

Returns:

| Type | Description |
|---|---|
| MemoryEntry \| None | The memory entry, or None if it does not exist. |

Raises:

| Type | Description |
|---|---|
| MemoryConnectionError | If the backend is not connected. |
| MemoryRetrievalError | If the backend query fails. |

Source code in src/synthorg/memory/protocol.py
delete async ¶

Delete a specific memory entry.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| agent_id | NotBlankStr | Owning agent identifier. | required |
| memory_id | NotBlankStr | Memory identifier. | required |

Returns:

| Type | Description |
|---|---|
| bool | Whether an entry was deleted. |

Raises:

| Type | Description |
|---|---|
| MemoryConnectionError | If the backend is not connected. |
| MemoryStoreError | If the delete operation fails. |

Source code in src/synthorg/memory/protocol.py
count async ¶

Count memory entries for an agent.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| agent_id | NotBlankStr | Owning agent identifier. | required |
| category | MemoryCategory \| None | Optional category filter. | None |

Returns:

| Type | Description |
|---|---|
| int | Number of matching entries. |

Raises:

| Type | Description |
|---|---|
| MemoryConnectionError | If the backend is not connected. |
| MemoryRetrievalError | If the count query fails. |

Source code in src/synthorg/memory/protocol.py
Config¶
config ¶
Memory configuration models.
Frozen Pydantic models for company-wide memory backend selection and backend-specific settings.
MemoryStorageConfig pydantic-model ¶

Bases: BaseModel

Storage-specific memory configuration.

Attributes:

| Name | Type | Description |
|---|---|---|
| data_dir | NotBlankStr | Directory path for memory data persistence. |
| vector_store | NotBlankStr | Vector store backend name. |
| history_store | NotBlankStr | History store backend name. |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- data_dir (NotBlankStr)
- vector_store (NotBlankStr)
- history_store (NotBlankStr)

Validators:

- _validate_store_names
- _reject_traversal

data_dir pydantic-field ¶

Directory path for memory data persistence. Default targets a Docker volume mount -- override for local development.
MemoryOptionsConfig pydantic-model ¶

Bases: BaseModel

Memory behaviour options.

Attributes:

| Name | Type | Description |
|---|---|---|
| retention_days | int \| None | Days to retain memories (None = retain indefinitely). |
| max_memories_per_agent | int | Maximum memories per agent. |
| consolidation_interval | ConsolidationInterval | How often to consolidate memories. |
| shared_knowledge_base | bool | Whether shared knowledge is enabled. |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- retention_days (int | None)
- max_memories_per_agent (int)
- consolidation_interval (ConsolidationInterval)
- shared_knowledge_base (bool)
EmbedderOverrideConfig pydantic-model ¶

Bases: BaseModel

User-facing embedder override configuration.

Allows users to override the auto-selected embedding model via company YAML config, runtime settings, or template config. All fields are optional -- None means "use auto-selection".

Attributes:

| Name | Type | Description |
|---|---|---|
| provider | NotBlankStr \| None | Embedding provider name override. |
| model | NotBlankStr \| None | Embedding model identifier override. |
| dims | int \| None | Embedding vector dimensions (required when model is set). |

Config:

- frozen: True
- extra: forbid
- allow_inf_nan: False

Fields:

- provider (NotBlankStr | None)
- model (NotBlankStr | None)
- dims (int | None)

Validators:

- _model_requires_dims
CompanyMemoryConfig pydantic-model ¶

Bases: BaseModel

Top-level company-wide memory configuration.

Attributes:

| Name | Type | Description |
|---|---|---|
| backend | NotBlankStr | Memory backend name (validated against the set of known backends). |
| level | MemoryLevel | Default memory persistence level. |
| storage | MemoryStorageConfig | Storage-specific settings. |
| options | MemoryOptionsConfig | Memory behaviour options. |
| retrieval | MemoryRetrievalConfig | Memory retrieval pipeline settings. |
| consolidation | ConsolidationConfig | Memory consolidation settings. |
| embedder | EmbedderOverrideConfig \| None | Optional embedder override (None = auto-selection). |
| procedural | ProceduralMemoryConfig | Procedural memory auto-generation settings. |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- backend (NotBlankStr)
- level (MemoryLevel)
- storage (MemoryStorageConfig)
- options (MemoryOptionsConfig)
- retrieval (MemoryRetrievalConfig)
- consolidation (ConsolidationConfig)
- embedder (EmbedderOverrideConfig | None)
- procedural (ProceduralMemoryConfig)

Validators:

- _validate_backend_name

embedder pydantic-field ¶

Optional embedder override. When set, overrides auto-selection for provider, model, and/or dims.

procedural pydantic-field ¶

Procedural memory auto-generation settings. Controls whether failure-driven skill proposals are generated, which model to use, and quality thresholds.
Models¶
models ¶
Memory domain models.
Frozen Pydantic models for memory storage requests, entries, and
queries. MemoryStoreRequest is what callers pass to store();
MemoryEntry is what comes back from retrieve().
MemoryMetadata pydantic-model ¶

Bases: BaseModel

Metadata associated with a memory entry.

Attributes:

| Name | Type | Description |
|---|---|---|
| source | NotBlankStr \| None | Origin of the memory (task ID, conversation, etc.). |
| confidence | float | Confidence score for the memory (0.0 to 1.0). |
| tags | tuple[NotBlankStr, ...] | Categorization tags for filtering. |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- source (NotBlankStr | None)
- confidence (float)
- tags (tuple[NotBlankStr, ...])

Validators:

- _deduplicate_tags
MemoryStoreRequest pydantic-model ¶

Bases: BaseModel

Input to MemoryBackend.store().

The backend assigns id and created_at; callers should not fabricate them.

Attributes:

| Name | Type | Description |
|---|---|---|
| category | MemoryCategory | Memory type category. |
| content | NotBlankStr | Memory content text. |
| metadata | MemoryMetadata | Associated metadata. |
| expires_at | AwareDatetime \| None | Optional expiration timestamp. |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- category (MemoryCategory)
- content (NotBlankStr)
- metadata (MemoryMetadata)
- expires_at (AwareDatetime | None)
MemoryEntry pydantic-model ¶

Bases: BaseModel

A memory entry returned from the backend.

Attributes:

| Name | Type | Description |
|---|---|---|
| id | NotBlankStr | Unique memory identifier (assigned by backend). |
| agent_id | NotBlankStr | Owning agent identifier. |
| category | MemoryCategory | Memory type category. |
| content | NotBlankStr | Memory content text. |
| metadata | MemoryMetadata | Associated metadata. |
| created_at | AwareDatetime | Creation timestamp. |
| updated_at | AwareDatetime \| None | Last update timestamp. |
| expires_at | AwareDatetime \| None | Optional expiration timestamp. |
| relevance_score | float \| None | Relevance score set by backend on retrieval. |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- id (NotBlankStr)
- agent_id (NotBlankStr)
- category (MemoryCategory)
- content (NotBlankStr)
- metadata (MemoryMetadata)
- created_at (AwareDatetime)
- updated_at (AwareDatetime | None)
- expires_at (AwareDatetime | None)
- relevance_score (float | None)

Validators:

- _validate_timestamps
MemoryQuery pydantic-model ¶

Bases: BaseModel

Query parameters for MemoryBackend.retrieve().

When text is None, the backend performs metadata-only filtering (no semantic search).

Attributes:

| Name | Type | Description |
|---|---|---|
| text | NotBlankStr \| None | Semantic search text (None = metadata-only filtering). |
| categories | frozenset[MemoryCategory] \| None | Filter by memory categories. |
| tags | tuple[NotBlankStr, ...] | Filter by tags (AND semantics). |
| min_relevance | float | Minimum relevance score threshold. |
| limit | int | Maximum number of results. |
| since | AwareDatetime \| None | Only memories created at or after this timestamp. |
| until | AwareDatetime \| None | Only memories created before this timestamp. |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- text (NotBlankStr | None)
- categories (frozenset[MemoryCategory] | None)
- tags (tuple[NotBlankStr, ...])
- min_relevance (float)
- limit (int)
- since (AwareDatetime | None)
- until (AwareDatetime | None)

Validators:

- _deduplicate_tags
- _validate_time_range
Capabilities¶
capabilities ¶
MemoryCapabilities protocol -- capability discovery.
Backends that implement MemoryCapabilities expose what features
they support, enabling runtime capability checks.
MemoryCapabilities ¶

Bases: Protocol

Capability discovery for memory backends.

Attributes:

| Name | Type | Description |
|---|---|---|
| supported_categories | frozenset[MemoryCategory] | Memory categories this backend supports. |
| supports_graph | bool | Whether graph-based memory is available. |
| supports_temporal | bool | Whether temporal tracking is available. |
| supports_vector_search | bool | Whether vector/semantic search is available. |
| supports_shared_access | bool | Whether cross-agent shared memory is available. |
| max_memories_per_agent | int \| None | Maximum memories per agent, or None for no limit. |
Errors¶
errors ¶
Memory error hierarchy.
All memory-related errors inherit from MemoryError so callers can
catch the entire family with a single except clause.
Note: this shadows the built-in MemoryError (which signals
out-of-memory conditions in CPython). Within the synthorg
namespace the domain-specific meaning is unambiguous; callers outside
the package should import explicitly.
MemoryError ¶

Bases: Exception

Base exception for all memory operations.

MemoryConnectionError ¶

Bases: MemoryError

Raised when a backend connection cannot be established or is lost.

MemoryStoreError ¶

Bases: MemoryError

Raised when a store operation fails.

MemoryRetrievalError ¶

Bases: MemoryError

Raised when a retrieve or search operation fails.

MemoryNotFoundError ¶

Bases: MemoryError

Raised when a specific memory ID is not found.

Note: The MemoryBackend.get() protocol method returns None for missing entries rather than raising this error. This exception is available for concrete backend implementations that need to signal "not found" in non-protocol internal methods or batch operations.

MemoryConfigError ¶

Bases: MemoryError

Raised when memory configuration is invalid.

MemoryCapabilityError ¶

Bases: MemoryError

Raised when an unsupported operation is attempted for a backend.
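The shadowing caveat above can be demonstrated with a self-contained re-creation of the hierarchy's shape (the class bodies here are stand-ins, not the real definitions): a domain MemoryError is a distinct class from builtins.MemoryError, so catching the domain family never swallows a genuine out-of-memory condition.

```python
import builtins


class MemoryError(Exception):
    """Domain base error -- intentionally shadows builtins.MemoryError."""


class MemoryConnectionError(MemoryError):
    pass


class MemoryStoreError(MemoryError):
    pass


def classify(exc: BaseException) -> str:
    # One except clause covers the whole domain family...
    if isinstance(exc, MemoryError):
        return "memory-domain"
    # ...while the CPython built-in is not part of it.
    if isinstance(exc, builtins.MemoryError):
        return "out-of-memory"
    return "other"
```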
Factory¶
factory ¶
Factory for creating memory backends from configuration.
Each company gets its own MemoryBackend instance. The factory
dispatches to concrete backend implementations based on
config.backend.
create_memory_backend ¶

Create a memory backend from configuration.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | CompanyMemoryConfig | Memory configuration (includes backend selection and backend-specific settings). | required |
| embedder | Mem0EmbedderConfig \| None | Backend-specific embedder configuration. Required for the mem0 backend. | None |

Returns:

| Type | Description |
|---|---|
| MemoryBackend | A new, disconnected backend instance. The caller must call connect() before use. |

Raises:

| Type | Description |
|---|---|
| MemoryConfigError | If the backend is not recognized or required configuration is missing. |

Source code in src/synthorg/memory/factory.py
Retriever¶
retriever ¶
Context injection strategy -- pre-retrieves and injects memories.
Orchestrates the full retrieval pipeline: backend query → ranking →
budget-fit → format. Implements MemoryInjectionStrategy protocol.
ContextInjectionStrategy ¶

ContextInjectionStrategy(*, backend, config, shared_store=None, token_estimator=None, memory_filter=None)

Context injection strategy -- pre-retrieves and injects memories.

Implements MemoryInjectionStrategy protocol. Orchestrates the full pipeline: retrieve → rank → budget-fit → format.

Initialise the context injection strategy.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| backend | MemoryBackend | Memory backend for personal memories. | required |
| config | MemoryRetrievalConfig | Retrieval pipeline configuration. | required |
| shared_store | SharedKnowledgeStore \| None | Optional shared knowledge store. | None |
| token_estimator | TokenEstimator \| None | Optional custom token estimator. | None |
| memory_filter | MemoryFilterStrategy \| None | Optional filter applied after ranking, before formatting. When None, no filtering is applied. | None |

Source code in src/synthorg/memory/retriever.py
strategy_name property ¶

Human-readable strategy identifier.

Returns:

| Type | Description |
|---|---|
| str | Strategy name string. |

prepare_messages async ¶

Full pipeline: retrieve → rank → budget-fit → format.

Returns empty tuple on any failure (graceful degradation). Never raises domain memory errors to the caller. Re-raises builtins.MemoryError and RecursionError.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| agent_id | NotBlankStr | The agent requesting memories. | required |
| query_text | NotBlankStr | Text for semantic retrieval. | required |
| token_budget | int | Maximum tokens for memory content. | required |
| categories | frozenset[MemoryCategory] \| None | Optional category filter. | None |

Returns:

| Type | Description |
|---|---|
| tuple[ChatMessage, ...] | Tuple of ChatMessage objects to inject, or an empty tuple. |

Source code in src/synthorg/memory/retriever.py
get_tool_definitions ¶

Context injection provides no tools.

Returns:

| Type | Description |
|---|---|
| tuple[ToolDefinition, ...] | Empty tuple. |
Retrieval Config¶
retrieval_config ¶
Memory retrieval pipeline configuration.
Frozen Pydantic config for the retrieval pipeline -- weights, thresholds, and strategy selection.
MemoryRetrievalConfig pydantic-model ¶

Bases: BaseModel

Configuration for the memory retrieval and ranking pipeline.

Attributes:

| Name | Type | Description |
|---|---|---|
| strategy | InjectionStrategy | Which injection strategy to use. |
| relevance_weight | float | Weight for backend relevance score (0.0-1.0). |
| recency_weight | float | Weight for recency decay score (0.0-1.0). |
| recency_decay_rate | float | Exponential decay rate per hour. |
| personal_boost | float | Boost applied to personal over shared (0.0-1.0). |
| min_relevance | float | Minimum combined (relevance + recency) score to include. |
| max_memories | int | Maximum candidates to retrieve (1-100). |
| include_shared | bool | Whether to query SharedKnowledgeStore. |
| default_relevance | float | Score for entries missing relevance_score. |
| injection_point | InjectionPoint | Message role for context injection. |
| non_inferable_only | bool | When True, auto-creates a tag-based filter so only non-inferable memories are injected. |
| fusion_strategy | FusionStrategy | Ranking fusion strategy -- LINEAR for single-source relevance+recency, RRF for multi-source ranked list merging. |
| rrf_k | int | RRF smoothing constant (1-1000, only used with RRF strategy). |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- strategy (InjectionStrategy)
- relevance_weight (float)
- recency_weight (float)
- recency_decay_rate (float)
- personal_boost (float)
- min_relevance (float)
- max_memories (int)
- include_shared (bool)
- default_relevance (float)
- injection_point (InjectionPoint)
- non_inferable_only (bool)
- fusion_strategy (FusionStrategy)
- rrf_k (int)
- query_reformulation_enabled (bool)
- max_reformulation_rounds (int)

Validators:

- _validate_weight_sum
- _validate_rrf_k_strategy_consistency
- _validate_reformulation_not_supported
- _validate_personal_boost_rrf_consistency
- _validate_supported_strategy
min_relevance pydantic-field ¶

Minimum combined (relevance + recency) score to include.

default_relevance pydantic-field ¶

Score for entries missing relevance_score.

non_inferable_only pydantic-field ¶

When True, only inject memories tagged as non-inferable.

fusion_strategy pydantic-field ¶

Ranking fusion strategy: linear for single-source relevance+recency, rrf for multi-source ranked list merging.

rrf_k pydantic-field ¶

RRF smoothing constant k (only used with RRF strategy).

query_reformulation_enabled pydantic-field ¶

Reserved for future query reformulation support in the TOOL_BASED strategy. Not yet wired into the retrieval pipeline -- must remain False until implemented.

max_reformulation_rounds pydantic-field ¶

Reserved for future query reformulation support (1-5). Currently unused until reformulation is wired.
Ranking¶
ranking ¶
Memory ranking -- scoring and sorting functions.
All functions are deterministic given the same inputs; logging calls are their only side effect.
rank_memories scores entries via linear combination of relevance
and recency (single-source). fuse_ranked_lists merges multiple
pre-ranked lists via Reciprocal Rank Fusion (multi-source).
FusionStrategy ¶

Bases: StrEnum

Ranking fusion strategy selection.

Attributes:

| Name | Description |
|---|---|
| LINEAR | Weighted linear combination of relevance and recency (default, for single-source scoring). |
| RRF | Reciprocal Rank Fusion for merging multiple ranked lists (for multi-source hybrid search). |

ScoredMemory pydantic-model ¶

Bases: BaseModel

Memory entry with computed ranking scores.

Attributes:

| Name | Type | Description |
|---|---|---|
| entry | MemoryEntry | The original memory entry. |
| relevance_score | float | Relevance score after pipeline-specific transformations (0.0-1.0). |
| recency_score | float | Exponential decay based on age (0.0-1.0). Always 0.0 for RRF-produced results. |
| combined_score | float | Final ranking signal (0.0-1.0). For linear ranking this is a weighted combination; for RRF this is the normalized fusion score. |
| is_shared | bool | Whether this came from SharedKnowledgeStore. |

Config:

- frozen: True
- allow_inf_nan: False

Fields:

- entry (MemoryEntry)
- relevance_score (float)
- recency_score (float)
- combined_score (float)
- is_shared (bool)
compute_recency_score ¶

Compute exponential recency decay score.

exp(-decay_rate * age_hours). Returns 1.0 for zero age, decays toward 0.0 over time. Future timestamps are clamped to 1.0.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| created_at | datetime | When the memory was created. | required |
| now | datetime | Current timestamp for age calculation. | required |
| decay_rate | float | Exponential decay rate per hour. | required |

Returns:

| Type | Description |
|---|---|
| float | Recency score between 0.0 and 1.0. |

Source code in src/synthorg/memory/ranking.py
compute_combined_score ¶

Weighted linear combination of relevance and recency.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| relevance | float | Relevance score (0.0-1.0). | required |
| recency | float | Recency score (0.0-1.0). | required |
| relevance_weight | float | Weight for relevance. | required |
| recency_weight | float | Weight for recency. | required |

Returns:

| Type | Description |
|---|---|
| float | Combined score clamped to [0.0, 1.0]. When inputs are in [0.0, 1.0], the result is naturally bounded; the clamp guards against floating-point tolerance in the weight sum. |

Source code in src/synthorg/memory/ranking.py
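The two scoring formulas above are small enough to sketch in full. This version takes age in hours directly instead of two datetimes (a simplification of the documented signature); the decay, the future-timestamp clamp, and the weighted combination follow the described behaviour.

```python
import math


def compute_recency_score(age_hours: float, decay_rate: float) -> float:
    # exp(-rate * age); negative age (future timestamp) is clamped so the
    # score never exceeds 1.0.
    return min(1.0, math.exp(-decay_rate * max(age_hours, 0.0)))


def compute_combined_score(
    relevance: float,
    recency: float,
    relevance_weight: float,
    recency_weight: float,
) -> float:
    combined = relevance_weight * relevance + recency_weight * recency
    # Weights are validated to sum to ~1.0; the clamp absorbs float tolerance.
    return min(1.0, max(0.0, combined))
```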
rank_memories ¶

Score, merge, sort, filter, and truncate memory entries.

1. Score personal entries (with personal_boost).
2. Score shared entries (no boost).
3. Merge both sets.
4. Filter by min_relevance threshold on combined_score.
5. Sort descending by combined_score.
6. Truncate to max_memories.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| entries | tuple[MemoryEntry, ...] | Personal memory entries. | required |
| config | MemoryRetrievalConfig | Retrieval pipeline configuration. | required |
| now | datetime | Current timestamp for recency calculations. | required |
| shared_entries | tuple[MemoryEntry, ...] | Shared memory entries (no personal boost). | () |

Returns:

| Type | Description |
|---|---|
| tuple[ScoredMemory, ...] | Sorted and filtered tuple of ScoredMemory entries. |

Source code in src/synthorg/memory/ranking.py
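The filter → sort → truncate tail of the pipeline can be sketched on bare (memory_id, combined_score) pairs. This is a deliberately simplified stand-in: the real function scores MemoryEntry objects first and returns ScoredMemory models.

```python
def rank_tail(
    scored: list[tuple[str, float]],
    min_relevance: float,
    max_memories: int,
) -> list[tuple[str, float]]:
    """Filter, sort, and truncate (memory_id, combined_score) pairs."""
    kept = [m for m in scored if m[1] >= min_relevance]  # min_relevance filter
    kept.sort(key=lambda m: m[1], reverse=True)          # best first
    return kept[:max_memories]                           # truncate to budget
```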
fuse_ranked_lists ¶

Merge multiple pre-ranked lists via Reciprocal Rank Fusion.

RRF_score(doc) = sum(1 / (k + rank_i)) across all lists containing the document. Scores are min-max normalized to [0.0, 1.0].

For RRF output, only combined_score is the meaningful ranking signal. relevance_score preserves the entry's raw backend relevance (or 0.0 if absent); recency_score is 0.0.

When the same entry ID appears in multiple lists, the first MemoryEntry object encountered is retained.

Unlike rank_memories, this function does not apply a min_relevance threshold -- callers are responsible for post-filtering if needed.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ranked_lists | tuple[tuple[MemoryEntry, ...], ...] | Each inner tuple is a pre-sorted ranked list of memory entries (best first). | required |
| k | int | RRF smoothing constant (default 60, must be >= 1). Smaller values amplify rank differences. | 60 |
| max_results | int | Maximum entries to return (must be >= 1). | 20 |

Returns:

| Type | Description |
|---|---|
| tuple[ScoredMemory, ...] | Sorted tuple of ScoredMemory entries (best first). |

Raises:

| Type | Description |
|---|---|
| ValueError | If k < 1 or max_results < 1. |

Source code in src/synthorg/memory/ranking.py
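The RRF formula plus min-max normalization can be sketched over plain document IDs (a simplification: the real function operates on MemoryEntry objects and normalizes over the entries it returns; whether normalization happens before or after truncation is an assumption here).

```python
def fuse_ranked_lists(
    ranked_lists: list[list[str]],
    k: int = 60,
    max_results: int = 20,
) -> list[tuple[str, float]]:
    """Reciprocal Rank Fusion over lists of document IDs (best first)."""
    if k < 1 or max_results < 1:
        raise ValueError("k and max_results must be >= 1")
    scores: dict[str, float] = {}
    for ranked in ranked_lists:
        for rank, doc in enumerate(ranked, start=1):
            # RRF_score(doc) = sum over lists of 1 / (k + rank); the first
            # occurrence creates the entry, later lists only add score.
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:max_results]
    if not ordered:
        return []
    # Min-max normalize fused scores into [0.0, 1.0].
    hi, lo = ordered[0][1], ordered[-1][1]
    span = hi - lo
    return [(doc, 1.0 if span == 0 else (s - lo) / span) for doc, s in ordered]
```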
Filter¶
filter ¶
Memory filter strategies for non-inferable principle enforcement.
Filters scored memories before injection into agent prompts. The
TagBasedMemoryFilter (initial D23 implementation) retains only
memories tagged with "non-inferable"; the PassthroughMemoryFilter
is a no-op for backward compatibility and testing.
Both satisfy the MemoryFilterStrategy runtime-checkable protocol.
MemoryFilterStrategy ¶

Bases: Protocol

Protocol for filtering scored memories before prompt injection.

filter_for_injection ¶

Filter memories suitable for injection.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| memories | tuple[ScoredMemory, ...] | Ranked scored memories from the retrieval pipeline. | required |

Returns:

| Type | Description |
|---|---|
| tuple[ScoredMemory, ...] | Subset of memories that pass the filter. |

Source code in src/synthorg/memory/filter.py
TagBasedMemoryFilter ¶

Filter that retains only memories with a required tag.

The default required tag is "non-inferable" per D23. Memories whose entry.metadata.tags do not contain the required tag are excluded from prompt injection.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| required_tag | str | Tag that must be present for a memory to pass. | NON_INFERABLE_TAG |

Source code in src/synthorg/memory/filter.py

strategy_name property ¶

Human-readable name of the filter strategy.

Returns:

| Type | Description |
|---|---|
| str | Strategy name string. |

filter_for_injection ¶

Return only memories containing the required tag.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| memories | tuple[ScoredMemory, ...] | Ranked scored memories. | required |

Returns:

| Type | Description |
|---|---|
| tuple[ScoredMemory, ...] | Filtered tuple with only tagged memories. |

Source code in src/synthorg/memory/filter.py
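The tag check itself reduces to a membership test. In this sketch a memory is simplified to a (content, tags) pair rather than a ScoredMemory with nested entry.metadata.tags:

```python
NON_INFERABLE_TAG = "non-inferable"


def filter_for_injection(
    memories: tuple[tuple[str, tuple[str, ...]], ...],
    required_tag: str = NON_INFERABLE_TAG,
) -> tuple[tuple[str, tuple[str, ...]], ...]:
    """Keep only (content, tags) pairs that carry the required tag."""
    return tuple(m for m in memories if required_tag in m[1])
```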
PassthroughMemoryFilter ¶

No-op filter that returns all memories unchanged.

Useful for backward compatibility and testing -- all memories pass through without filtering.

strategy_name property ¶

Human-readable name of the filter strategy.

Returns:

| Type | Description |
|---|---|
| str | Strategy name string. |

filter_for_injection ¶

Return all memories unchanged.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| memories | tuple[ScoredMemory, ...] | Ranked scored memories. | required |

Returns:

| Type | Description |
|---|---|
| tuple[ScoredMemory, ...] | The input tuple unchanged. |

Source code in src/synthorg/memory/filter.py
Formatter¶
formatter ¶
Memory context formatter -- converts ranked memories to ChatMessages.
Handles token budget enforcement via greedy packing: iterates by rank, skips entries that exceed the remaining budget, and continues with smaller entries to maximise context within the token limit.
MEMORY_BLOCK_START module-attribute ¶

Delimiter marking the start of memory context.

MEMORY_BLOCK_END module-attribute ¶

Delimiter marking the end of memory context.

format_memory_context ¶

Format ranked memories into ChatMessage(s), respecting token budget.

Uses greedy packing: iterates memories by rank order and includes each one if it fits within the remaining budget.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| memories | tuple[ScoredMemory, ...] | Pre-ranked memories (highest score first). | required |
| estimator | TokenEstimator | Token estimation implementation. | required |
| token_budget | int | Maximum tokens for the memory block. | required |
| injection_point | InjectionPoint | Role for the output message. | SYSTEM |

Returns:

| Type | Description |
|---|---|
| tuple[ChatMessage, ...] | Tuple containing a single ChatMessage with the formatted memories, or an empty tuple if no memories fit or the input is empty. |

Source code in src/synthorg/memory/formatter.py
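The greedy packing described above can be sketched on bare content strings (a simplification of the real function, which packs ScoredMemory entries and wraps the result in a ChatMessage with block delimiters):

```python
from typing import Callable


def greedy_pack(
    memories: list[str],
    estimate_tokens: Callable[[str], int],
    token_budget: int,
) -> list[str]:
    """Pack best-ranked-first content into a token budget.

    Entries that would overflow the remaining budget are skipped, and
    iteration continues so a later, smaller entry may still fit.
    """
    remaining = token_budget
    packed: list[str] = []
    for content in memories:
        cost = estimate_tokens(content)
        if cost <= remaining:
            packed.append(content)
            remaining -= cost
        # else: skip this entry and keep trying smaller ones
    return packed
```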
Injection¶
injection ¶
Memory injection strategy protocol and supporting types.
Defines the pluggable MemoryInjectionStrategy protocol that
controls how memories reach agents during execution. Three
strategies are planned (context injection, tool-based, self-editing);
this module provides the protocol and enums for all.
ContextInjectionStrategy (in synthorg.memory.retriever) and
ToolBasedInjectionStrategy (in synthorg.memory.tool_retriever)
are implemented.
TokenEstimator is a local structural protocol that avoids a
memory -> engine import cycle (PromptTokenEstimator lives in
engine/prompt.py). Any object with estimate_tokens(str) -> int
satisfies it automatically.
InjectionStrategy ¶

Bases: StrEnum

Which injection strategy to use for surfacing memories.

Attributes:

| Name | Description |
|---|---|
| CONTEXT | Pre-execution context injection (implemented). |
| TOOL_BASED | On-demand via agent tools (implemented). |
| SELF_EDITING | Structured read/write memory blocks (future). |

InjectionPoint ¶

Bases: StrEnum

Role of the injected memory message.

Attributes:

| Name | Description |
|---|---|
| SYSTEM | Memory injected as a SYSTEM message (default). |
| USER | Memory injected as a USER message. |

TokenEstimator ¶

Bases: Protocol

Token estimation protocol (avoids memory → engine dependency).

Any object with estimate_tokens(str) -> int satisfies this protocol structurally.
estimate_tokens ¶

Estimate the number of tokens in text.

Implementations must return non-negative values.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| text | str | The text to estimate tokens for. | required |

Returns:

| Type | Description |
|---|---|
| int | Estimated token count (non-negative). |

Source code in src/synthorg/memory/injection.py
DefaultTokenEstimator ¶

Heuristic token estimator: len(text) // 4.

Suitable for rough budget enforcement when a model-specific tokenizer is unavailable.

estimate_tokens ¶

Estimate tokens as max(1, len(text) // 4) for non-empty text.

Returns 0 for empty text, at least 1 for any non-empty text.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| text | str | The text to estimate tokens for. | required |

Returns:

| Type | Description |
|---|---|
| int | Estimated token count (non-negative). |

Source code in src/synthorg/memory/injection.py
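The documented heuristic is a one-liner; written standalone it reads:

```python
def estimate_tokens(text: str) -> int:
    # Heuristic: ~4 characters per token. Empty text is zero tokens,
    # any non-empty text counts as at least one.
    return 0 if not text else max(1, len(text) // 4)
```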
MemoryInjectionStrategy
¶
Bases: Protocol
Pluggable strategy for making memories available to agents.
Implementations determine how memories reach the agent:
- Context injection: pre-execution message injection.
- Tool-based: on-demand retrieval via agent tools.
- Self-editing: structured read/write memory blocks.
strategy_name
property
¶
Human-readable strategy identifier.
Returns:
| Type | Description |
|---|---|
str
|
Strategy name string. |
prepare_messages
async
¶
Return memory messages to inject into agent context.
Context injection returns ranked, formatted memories. Tool-based may return empty (tools handle retrieval). Self-editing returns the core memory block.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
agent_id
|
NotBlankStr
|
The agent requesting memories. |
required |
query_text
|
NotBlankStr
|
Text to use for semantic retrieval. |
required |
token_budget
|
int
|
Maximum tokens for memory content. |
required |
Returns:
| Type | Description |
|---|---|
tuple[ChatMessage, ...]
|
Tuple of |
Source code in src/synthorg/memory/injection.py
get_tool_definitions
¶
Return tool definitions this strategy provides.
Context injection returns (). Tool-based returns
recall/search tools. Self-editing returns read/write tools.
Returns:

| Type | Description |
|---|---|
| `tuple[ToolDefinition, ...]` | Tuple of tool definitions (possibly empty). |
Source code in src/synthorg/memory/injection.py
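The three protocol members can be sketched as a minimal context-injection implementation. Everything below is illustrative: `ChatMessage` and `ToolDefinition` are simplified stand-ins for the real types, and the greedy packing heuristic is an assumption, not the library's algorithm.

```python
import asyncio
from typing import Protocol

# Stand-ins for the real ChatMessage / ToolDefinition types (illustrative).
ChatMessage = tuple[str, str]  # (role, content)
ToolDefinition = dict


class MemoryInjectionStrategy(Protocol):
    @property
    def strategy_name(self) -> str: ...
    async def prepare_messages(
        self, agent_id: str, query_text: str, token_budget: int
    ) -> tuple[ChatMessage, ...]: ...
    def get_tool_definitions(self) -> tuple[ToolDefinition, ...]: ...


class ContextInjectionStrategy:
    """Context-injection flavour: returns formatted memories, exposes no tools."""

    def __init__(self, memories: dict[str, list[str]]) -> None:
        self._memories = memories

    @property
    def strategy_name(self) -> str:
        return "context_injection"

    async def prepare_messages(self, agent_id, query_text, token_budget):
        # Greedily pack memories under the budget (~4 chars per token).
        out: list[ChatMessage] = []
        used = 0
        for text in self._memories.get(agent_id, []):
            cost = max(1, len(text) // 4)
            if used + cost > token_budget:
                break
            out.append(("system", f"[memory] {text}"))
            used += cost
        return tuple(out)

    def get_tool_definitions(self):
        return ()  # context injection provides no tools


strategy = ContextInjectionStrategy({"a1": ["prefers terse answers", "x" * 400]})
msgs = asyncio.run(strategy.prepare_messages("a1", "greeting", token_budget=10))
print(msgs)  # (('system', '[memory] prefers terse answers'),)
```

A tool-based strategy would invert this shape: `prepare_messages` returns `()` and `get_tool_definitions` returns recall/search tools instead.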
Org Memory¶
protocol
¶
OrgMemoryBackend protocol -- lifecycle + org memory operations.
Application code depends on this protocol for shared organizational memory storage and retrieval. Concrete backends implement this protocol to provide company-wide knowledge management.
OrgMemoryBackend
¶
Bases: Protocol
Structural interface for organizational memory backends.
Provides company-wide knowledge storage, retrieval, and lifecycle management. All operations require a connected backend.
Attributes:

| Name | Type | Description |
|---|---|---|
| `is_connected` | `bool` | Whether the backend has an active connection. |
| `backend_name` | `NotBlankStr` | Human-readable backend identifier. |
connect
async
¶
Establish connection to the org memory backend.
Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If the connection fails. |
disconnect
async
¶
health_check
async
¶
Check whether the backend is healthy and responsive.
Returns:

| Type | Description |
|---|---|
| `bool` | `True` if the backend is healthy and responsive, `False` otherwise. |
query
async
¶
Query organizational facts.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `query` | `OrgMemoryQuery` | Query parameters. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[OrgFact, ...]` | Matching facts ordered by relevance. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryQueryError` | If the query fails. |
Source code in src/synthorg/memory/org/protocol.py
write
async
¶
Write a new organizational fact.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `request` | `OrgFactWriteRequest` | Fact content and category. | required |
| `author` | `OrgFactAuthor` | The author of the fact. | required |

Returns:

| Type | Description |
|---|---|
| `NotBlankStr` | The assigned fact ID. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryAccessDeniedError` | If write access is denied. |
| `OrgMemoryWriteError` | If the write operation fails. |
Source code in src/synthorg/memory/org/protocol.py
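The lifecycle plus write/query flow can be exercised against a toy in-memory stand-in. This sketch simplifies the protocol: the real `write` takes `OrgFactWriteRequest`/`OrgFactAuthor` models and raises `OrgMemoryConnectionError` (here a plain `RuntimeError`); class and method bodies are illustrative.

```python
import asyncio
from itertools import count


class InMemoryOrgBackend:
    backend_name = "in_memory"

    def __init__(self) -> None:
        self.is_connected = False
        self._facts: dict[str, tuple[str, str]] = {}
        self._ids = count(1)

    async def connect(self) -> None:
        self.is_connected = True

    async def disconnect(self) -> None:
        self.is_connected = False

    async def health_check(self) -> bool:
        return self.is_connected

    async def write(self, content: str, category: str) -> str:
        if not self.is_connected:
            raise RuntimeError("not connected")  # real code: OrgMemoryConnectionError
        fact_id = f"fact-{next(self._ids)}"
        self._facts[fact_id] = (content, category)
        return fact_id

    async def query(self, category: str) -> tuple[str, ...]:
        if not self.is_connected:
            raise RuntimeError("not connected")
        return tuple(c for c, cat in self._facts.values() if cat == category)


async def main() -> tuple[str, ...]:
    backend = InMemoryOrgBackend()
    await backend.connect()
    await backend.write("PRs need two approvals", "policy")
    await backend.write("Standup is at 09:30", "procedure")
    return await backend.query("policy")


facts = asyncio.run(main())
print(facts)  # ('PRs need two approvals',)
```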
config
¶
Org memory configuration models.
Frozen Pydantic models for organizational memory backend selection and behaviour settings.
ExtendedStoreConfig
pydantic-model
¶
Bases: BaseModel
Configuration for the extended org facts store.
Attributes:

| Name | Type | Description |
|---|---|---|
| `backend` | `NotBlankStr` | Store backend name. |
| `max_retrieved_per_query` | `int` | Maximum facts to retrieve per query. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Validators:
- `_validate_backend_name`
OrgMemoryConfig
pydantic-model
¶
Bases: BaseModel
Top-level organizational memory configuration.
Attributes:

| Name | Type | Description |
|---|---|---|
| `backend` | `NotBlankStr` | Org memory backend name. |
| `core_policies` | `tuple[NotBlankStr, ...]` | Core policy texts injected into system prompts. |
| `extended_store` | `ExtendedStoreConfig` | Extended facts store configuration. |
| `write_access` | `WriteAccessConfig` | Write access control configuration. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `backend` (`NotBlankStr`)
- `core_policies` (`tuple[NotBlankStr, ...]`)
- `extended_store` (`ExtendedStoreConfig`)
- `write_access` (`WriteAccessConfig`)

Validators:
- `_validate_backend_name`
models
¶
Org memory domain models.
Frozen Pydantic models for organizational facts -- shared company-wide knowledge such as policies, ADRs, procedures, and conventions.
OrgFactAuthor
pydantic-model
¶
Bases: BaseModel
Author of an organizational fact.
If is_human is True, agent_id must be None.
If is_human is False, agent_id and seniority
are required.
Attributes:

| Name | Type | Description |
|---|---|---|
| `agent_id` | `NotBlankStr \| None` | Agent identifier (`None` for human authors). |
| `seniority` | `SeniorityLevel \| None` | Agent seniority level (`None` for human authors). |
| `is_human` | `bool` | Whether the author is a human operator. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `agent_id` (`NotBlankStr | None`)
- `seniority` (`SeniorityLevel | None`)
- `is_human` (`bool`)

Validators:
- `_validate_author_consistency`
OrgFact
pydantic-model
¶
Bases: BaseModel
An organizational fact -- a piece of shared company-wide knowledge.
Attributes:

| Name | Type | Description |
|---|---|---|
| `id` | `NotBlankStr` | Unique identifier for this fact. |
| `content` | `NotBlankStr` | The fact content text. |
| `category` | `OrgFactCategory` | Category classification. |
| `author` | `OrgFactAuthor` | Who created this fact. |
| `created_at` | `AwareDatetime` | Creation timestamp. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `id` (`NotBlankStr`)
- `content` (`NotBlankStr`)
- `category` (`OrgFactCategory`)
- `author` (`OrgFactAuthor`)
- `created_at` (`AwareDatetime`)
OrgFactWriteRequest
pydantic-model
¶
Bases: BaseModel
Request to write a new organizational fact.
Attributes:

| Name | Type | Description |
|---|---|---|
| `content` | `NotBlankStr` | The fact content text. |
| `category` | `OrgFactCategory` | Category classification. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`
OrgMemoryQuery
pydantic-model
¶
Bases: BaseModel
Query parameters for org memory retrieval.
Attributes:

| Name | Type | Description |
|---|---|---|
| `context` | `NotBlankStr \| None` | Text search context (`None` for no text filter). |
| `categories` | `frozenset[OrgFactCategory] \| None` | Filter by fact categories. |
| `limit` | `int` | Maximum number of results. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `context` (`NotBlankStr | None`)
- `categories` (`frozenset[OrgFactCategory] | None`)
- `limit` (`int`)
store
¶
Org fact store -- protocol and SQLite implementation.
Self-contained storage for organizational facts, separate from the operational persistence layer.
OrgFactStore
¶
Bases: Protocol
Protocol for organizational fact persistence.
connect
async
¶
disconnect
async
¶
save
async
¶
Save an organizational fact.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fact` | `OrgFact` | The fact to persist. | required |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryWriteError` | If the save fails. |
get
async
¶
Get a fact by ID.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fact_id` | `NotBlankStr` | The fact identifier. | required |

Returns:

| Type | Description |
|---|---|
| `OrgFact \| None` | The fact, or `None` if not found. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryQueryError` | If the query fails. |
Source code in src/synthorg/memory/org/store.py
query
async
¶
Query facts by category and/or text.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `categories` | `frozenset[OrgFactCategory] \| None` | Optional category filter. | `None` |
| `text` | `str \| None` | Optional text search (substring match). | `None` |
| `limit` | `int` | Maximum results. | `5` |

Returns:

| Type | Description |
|---|---|
| `tuple[OrgFact, ...]` | Matching facts. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryQueryError` | If the query fails. |
Source code in src/synthorg/memory/org/store.py
list_by_category
async
¶
List all facts in a category.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `category` | `OrgFactCategory` | The category to list. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[OrgFact, ...]` | All facts in the category. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryQueryError` | If the query fails. |
Source code in src/synthorg/memory/org/store.py
delete
async
¶
Delete a fact by ID.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fact_id` | `NotBlankStr` | Fact identifier. | required |

Returns:

| Type | Description |
|---|---|
| `bool` | `True` if the fact was deleted, `False` if it did not exist. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryWriteError` | If the delete fails. |
Source code in src/synthorg/memory/org/store.py
SQLiteOrgFactStore
¶
SQLite-backed organizational fact store.
Uses a separate database from the operational persistence layer to keep institutional knowledge decoupled.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `db_path` | `str` | Path to the SQLite database file (or `:memory:` for an in-memory database). | required |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If the path contains traversal. |
Source code in src/synthorg/memory/org/store.py
connect
async
¶
Open the SQLite database with WAL mode and ensure schema.
Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If the connection fails. |
Source code in src/synthorg/memory/org/store.py
disconnect
async
¶
Close the database connection.
Source code in src/synthorg/memory/org/store.py
save
async
¶
Persist a fact to the database.
Uses INSERT (not INSERT OR REPLACE) to preserve the
append-only audit trail. Duplicate IDs raise
OrgMemoryWriteError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fact` | `OrgFact` | The fact to save. | required |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryWriteError` | If the save fails or the ID exists. |
Source code in src/synthorg/memory/org/store.py
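The append-only property described above is enforced by the database itself. The sketch below (table and column names are illustrative) shows why plain `INSERT` matters: `INSERT OR REPLACE` would silently overwrite an existing fact, whereas a duplicate ID under plain `INSERT` fails loudly with `IntegrityError`, which the store would wrap as `OrgMemoryWriteError`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE org_facts (id TEXT PRIMARY KEY, content TEXT NOT NULL)")

conn.execute("INSERT INTO org_facts (id, content) VALUES (?, ?)", ("fact-1", "original"))

try:
    # A second write with the same ID is rejected, preserving the audit trail.
    conn.execute("INSERT INTO org_facts (id, content) VALUES (?, ?)", ("fact-1", "overwrite"))
except sqlite3.IntegrityError as exc:
    print(f"rejected duplicate: {exc}")

# The original row is untouched.
row = conn.execute("SELECT content FROM org_facts WHERE id = ?", ("fact-1",)).fetchone()
print(row[0])  # original
```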
get
async
¶
Get a fact by its ID.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fact_id` | `NotBlankStr` | Fact identifier. | required |

Returns:

| Type | Description |
|---|---|
| `OrgFact \| None` | The fact, or `None` if not found. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryQueryError` | If the query fails. |
Source code in src/synthorg/memory/org/store.py
query
async
¶
Query facts by category and/or text content.
All dynamic values are passed as parameterized query parameters.
The WHERE clause is constructed from safe column/operator
constants only -- no user input is interpolated into SQL.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `categories` | `frozenset[OrgFactCategory] \| None` | Category filter. | `None` |
| `text` | `str \| None` | Text substring filter. | `None` |
| `limit` | `int` | Maximum results. | `5` |

Returns:

| Type | Description |
|---|---|
| `tuple[OrgFact, ...]` | Matching facts. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryQueryError` | If the query fails. |
Source code in src/synthorg/memory/org/store.py
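The documented pattern can be sketched as follows: the WHERE clause is assembled only from fixed column/operator constants, and every dynamic value travels as a bound `?` parameter, so user input never reaches the SQL text. Table and column names here are illustrative, not the store's actual schema.

```python
def build_query(categories=None, text=None, limit=5):
    """Build a parameterized query; returns (sql, params)."""
    clauses, params = [], []
    if categories:
        placeholders = ", ".join("?" for _ in categories)
        clauses.append(f"category IN ({placeholders})")  # constant column name only
        params.extend(sorted(categories))
    if text is not None:
        clauses.append("content LIKE ?")
        params.append(f"%{text}%")
    where = f" WHERE {' AND '.join(clauses)}" if clauses else ""
    sql = f"SELECT id, content, category FROM org_facts{where} ORDER BY created_at DESC LIMIT ?"
    params.append(limit)
    return sql, params


sql, params = build_query(categories={"policy"}, text="approval")
print(sql)     # SELECT ... WHERE category IN (?) AND content LIKE ? ... LIMIT ?
print(params)  # ['policy', '%approval%', 5]
```

Note that the search text appears only in `params`, never in `sql`, which is what defeats SQL injection.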
list_by_category
async
¶
List all facts in a category.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `category` | `OrgFactCategory` | The category to list. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[OrgFact, ...]` | All facts in the category. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryQueryError` | If the query fails. |
Source code in src/synthorg/memory/org/store.py
delete
async
¶
Delete a fact by ID.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fact_id` | `NotBlankStr` | Fact identifier. | required |

Returns:

| Type | Description |
|---|---|
| `bool` | `True` if the fact was deleted, `False` if it did not exist. |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryConnectionError` | If not connected. |
| `OrgMemoryWriteError` | If the delete fails. |
Source code in src/synthorg/memory/org/store.py
access_control
¶
Write access control for organizational memory.
Provides seniority-based and human-based write restriction models, configuration, and enforcement functions.
CategoryWriteRule
pydantic-model
¶
Bases: BaseModel
Write permission rule for a single fact category.
Attributes:

| Name | Type | Description |
|---|---|---|
| `allowed_seniority` | `SeniorityLevel \| None` | Minimum seniority level for agent writes. |
| `human_allowed` | `bool` | Whether human operators can write. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `allowed_seniority` (`SeniorityLevel | None`)
- `human_allowed` (`bool`)
WriteAccessConfig
pydantic-model
¶
Bases: BaseModel
Write access configuration for all fact categories.
Attributes:

| Name | Type | Description |
|---|---|---|
| `rules` | `dict[OrgFactCategory, CategoryWriteRule]` | Per-category write rules (read-only mapping). |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `rules` (`dict[OrgFactCategory, CategoryWriteRule]`)

Validators:
- `_wrap_rules_readonly`
check_write_access
¶
Check whether the given author may write to the given category.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `WriteAccessConfig` | Write access configuration. | required |
| `category` | `OrgFactCategory` | Target fact category. | required |
| `author` | `OrgFactAuthor` | The author attempting the write. | required |

Returns:

| Type | Description |
|---|---|
| `bool` | `True` if the write is permitted, `False` otherwise. |
Source code in src/synthorg/memory/org/access_control.py
require_write_access
¶
Check write access and raise if denied.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `WriteAccessConfig` | Write access configuration. | required |
| `category` | `OrgFactCategory` | Target fact category. | required |
| `author` | `OrgFactAuthor` | The author attempting the write. | required |

Raises:

| Type | Description |
|---|---|
| `OrgMemoryAccessDeniedError` | If the write is not permitted. |
Source code in src/synthorg/memory/org/access_control.py
Consolidation¶
config
¶
Memory consolidation configuration models.
Frozen Pydantic models for consolidation interval, retention, and archival settings.
RetentionConfig
pydantic-model
¶
Bases: BaseModel
Per-category retention configuration (company-level defaults).
These rules apply as the baseline for all agents. Individual agents can override specific categories via `MemoryConfig.retention_overrides`.
Resolution order per category (highest priority first):
- Agent per-category override
- Company per-category rule (this config)
- Agent global default (`MemoryConfig.retention_days`)
- Company global default (`default_retention_days`)
- Keep forever (no expiry)
Attributes:

| Name | Type | Description |
|---|---|---|
| `rules` | `tuple[RetentionRule, ...]` | Per-category retention rules (unique categories). |
| `default_retention_days` | `int \| None` | Default retention in days (`None` for no company-wide default). |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `rules` (`tuple[RetentionRule, ...]`)
- `default_retention_days` (`int | None`)

Validators:
- `_validate_unique_categories`
DualModeConfig
pydantic-model
¶
Bases: BaseModel
Configuration for dual-mode archival.
Controls density-aware archival: LLM abstractive summaries for sparse/conversational content vs extractive preservation (verbatim key facts + start/mid/end anchors) for dense/factual content.
Attributes:

| Name | Type | Description |
|---|---|---|
| `enabled` | `bool` | Whether dual-mode density classification is active. |
| `dense_threshold` | `float` | Density score threshold for DENSE classification (0.0 = classify everything as dense, 1.0 = everything sparse). |
| `summarization_model` | `NotBlankStr \| None` | Model ID for abstractive summarization. |
| `max_summary_tokens` | `int` | Maximum tokens for LLM summary responses. |
| `max_facts` | `int` | Maximum number of extracted key facts for extractive mode. |
| `anchor_length` | `int` | Character length for each extractive anchor snippet (start/mid/end). |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `enabled` (`bool`)
- `dense_threshold` (`float`)
- `summarization_model` (`NotBlankStr | None`)
- `max_summary_tokens` (`int`)
- `max_facts` (`int`)
- `anchor_length` (`int`)

Validators:
- `_validate_model_when_enabled`
ArchivalConfig
pydantic-model
¶
Bases: BaseModel
Archival configuration.
Attributes:

| Name | Type | Description |
|---|---|---|
| `enabled` | `bool` | Whether archival is enabled. |
| `age_threshold_days` | `int` | Minimum age in days before archival. |
| `dual_mode` | `DualModeConfig` | Dual-mode archival configuration. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `enabled` (`bool`)
- `age_threshold_days` (`int`)
- `dual_mode` (`DualModeConfig`)
ConsolidationConfig
pydantic-model
¶
Bases: BaseModel
Top-level memory consolidation configuration.
Attributes:

| Name | Type | Description |
|---|---|---|
| `interval` | `ConsolidationInterval` | How often to run consolidation. |
| `max_memories_per_agent` | `int` | Upper bound on memories per agent. |
| `retention` | `RetentionConfig` | Per-category retention settings. |
| `archival` | `ArchivalConfig` | Archival settings. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `interval` (`ConsolidationInterval`)
- `max_memories_per_agent` (`int`)
- `retention` (`RetentionConfig`)
- `archival` (`ArchivalConfig`)
max_memories_per_agent
pydantic-field
¶
Upper bound on memories per agent.
models
¶
Memory consolidation domain models.
Frozen Pydantic models for consolidation results, archival entries, retention rules, and dual-mode archival types.
ArchivalMode
¶
Bases: StrEnum
How a memory entry was archived during consolidation.
Determines the preservation strategy applied before archival.
ArchivalModeAssignment
pydantic-model
¶
Bases: BaseModel
Maps a removed memory entry to the archival mode applied.
Attributes:

| Name | Type | Description |
|---|---|---|
| `original_id` | `NotBlankStr` | ID of the removed memory entry. |
| `mode` | `ArchivalMode` | Archival mode applied to this entry. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`
ArchivalIndexEntry
pydantic-model
¶
Bases: BaseModel
Maps a removed memory entry to its archival store ID.
Enables deterministic index-based restore: agents can look up their own archived entries by original ID without semantic search.
Attributes:

| Name | Type | Description |
|---|---|---|
| `original_id` | `NotBlankStr` | ID of the original memory entry. |
| `archival_id` | `NotBlankStr` | ID assigned by the archival store. |
| `mode` | `ArchivalMode` | Archival mode used for this entry. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`
ConsolidationResult
pydantic-model
¶
Bases: BaseModel
Result of a memory consolidation run.
Attributes:

| Name | Type | Description |
|---|---|---|
| `removed_ids` | `tuple[NotBlankStr, ...]` | IDs of removed memory entries. |
| `summary_id` | `NotBlankStr \| None` | ID of the summary entry (if created). |
| `archived_count` | `int` | Number of entries archived. |
| `consolidated_count` | `int` | Derived from `removed_ids`. |
| `mode_assignments` | `tuple[ArchivalModeAssignment, ...]` | Per-entry archival mode assignments (set by strategy, empty for strategies that don't classify density). |
| `archival_index` | `tuple[ArchivalIndexEntry, ...]` | Maps original memory IDs to archival store IDs (built by service after archival completes). |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `removed_ids` (`tuple[NotBlankStr, ...]`)
- `summary_id` (`NotBlankStr | None`)
- `archived_count` (`int`)
- `mode_assignments` (`tuple[ArchivalModeAssignment, ...]`)
- `archival_index` (`tuple[ArchivalIndexEntry, ...]`)

Validators:
- `_validate_archival_consistency`
consolidated_count
property
¶
Number of memories consolidated (derived from removed_ids).
ArchivalEntry
pydantic-model
¶
Bases: BaseModel
An archived memory entry.
Attributes:

| Name | Type | Description |
|---|---|---|
| `original_id` | `NotBlankStr` | ID from the hot store. |
| `agent_id` | `NotBlankStr` | Owning agent identifier. |
| `content` | `NotBlankStr` | Memory content text. |
| `category` | `MemoryCategory` | Memory type category. |
| `metadata` | `MemoryMetadata` | Associated metadata. |
| `created_at` | `AwareDatetime` | Original creation timestamp. |
| `archived_at` | `AwareDatetime` | When this entry was archived. |
| `archival_mode` | `ArchivalMode` | How this entry was archived. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `original_id` (`NotBlankStr`)
- `agent_id` (`NotBlankStr`)
- `content` (`NotBlankStr`)
- `category` (`MemoryCategory`)
- `metadata` (`MemoryMetadata`)
- `created_at` (`AwareDatetime`)
- `archived_at` (`AwareDatetime`)
- `archival_mode` (`ArchivalMode`)

Validators:
- `_validate_temporal_order`
RetentionRule
pydantic-model
¶
Bases: BaseModel
Per-category retention rule.
Attributes:

| Name | Type | Description |
|---|---|---|
| `category` | `MemoryCategory` | Memory category this rule applies to. |
| `retention_days` | `int` | Number of days to retain memories. |

Config:
- `frozen`: `True`
- `allow_inf_nan`: `False`

Fields:
- `category` (`MemoryCategory`)
- `retention_days` (`int`)
strategy
¶
Consolidation strategy protocol.
Defines the interface for memory consolidation algorithms that compress and summarize older memories.
ConsolidationStrategy
¶
Bases: Protocol
Protocol for memory consolidation strategies.
Implementations receive a batch of memory entries and produce
a ConsolidationResult indicating which entries were merged,
removed, or summarized.
consolidate
async
¶
Consolidate a batch of memory entries.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `entries` | `tuple[MemoryEntry, ...]` | Memory entries to consolidate. | required |
| `agent_id` | `NotBlankStr` | Owning agent identifier. | required |

Returns:

| Type | Description |
|---|---|
| `ConsolidationResult` | Result describing what was consolidated. |
Source code in src/synthorg/memory/consolidation/strategy.py
service
¶
Memory consolidation service.
Orchestrates retention cleanup, consolidation, archival, and max-memories enforcement into a single maintenance entry point.
MemoryConsolidationService
¶
Orchestrates memory consolidation, retention, and archival.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend` | `MemoryBackend` | Memory backend for CRUD operations. | required |
| `config` | `ConsolidationConfig` | Consolidation configuration. | required |
| `strategy` | `ConsolidationStrategy \| None` | Optional consolidation strategy (skips the consolidation step if `None`). | `None` |
| `archival_store` | `ArchivalStore \| None` | Optional archival store (skips archival if `None`). | `None` |
Source code in src/synthorg/memory/consolidation/service.py
run_consolidation
async
¶
Run memory consolidation for an agent.
Retrieves up to 1000 entries per invocation and applies the consolidation strategy, then archives removed entries if archival is configured and enabled. Per-entry archival failures are logged and skipped -- they do not abort the entire batch.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agent_id` | `NotBlankStr` | Agent whose memories to consolidate. | required |

Returns:

| Type | Description |
|---|---|
| `ConsolidationResult` | Consolidation result (including archival count). |
Source code in src/synthorg/memory/consolidation/service.py
enforce_max_memories
async
¶
Enforce the maximum memories limit for an agent.
Retrieves excess entries in batches (up to 1000 per query,
the MemoryQuery.limit cap) and deletes them. The entries
selected for deletion depend on the backend's default query
ordering -- typically oldest-first, but consult the concrete
backend for specifics.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agent_id` | `NotBlankStr` | Agent to check. | required |

Returns:

| Type | Description |
|---|---|
| `int` | Number of entries deleted. |
Source code in src/synthorg/memory/consolidation/service.py
cleanup_retention
async
¶
Run retention cleanup for an agent.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agent_id` | `NotBlankStr` | Agent whose expired memories to clean up. | required |
| `agent_category_overrides` | `Mapping[MemoryCategory, int] \| None` | Per-category retention overrides for this agent. | `None` |
| `agent_default_retention_days` | `int \| None` | Agent-level default retention in days. | `None` |

Returns:

| Type | Description |
|---|---|
| `int` | Number of expired memories deleted. |
Source code in src/synthorg/memory/consolidation/service.py
run_maintenance
async
¶
Run full maintenance cycle for an agent.
Orchestrates: retention cleanup -> consolidation -> max enforcement.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agent_id` | `NotBlankStr` | Agent to maintain. | required |
| `agent_category_overrides` | `Mapping[MemoryCategory, int] \| None` | Per-category retention overrides for this agent. | `None` |
| `agent_default_retention_days` | `int \| None` | Agent-level default retention in days. | `None` |

Returns:

| Type | Description |
|---|---|
| `ConsolidationResult` | Consolidation result from the consolidation step. |
Source code in src/synthorg/memory/consolidation/service.py
retention
¶
Retention enforcer for memory lifecycle management.
Deletes memories that have exceeded their per-category retention period. Supports per-agent overrides that take priority over company-level defaults.
RetentionEnforcer
¶
Enforces per-category memory retention policies.
Queries for memories older than the configured retention period and deletes them from the backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `RetentionConfig` | Retention configuration with per-category rules. | required |
| `backend` | `MemoryBackend` | Memory backend for querying and deleting. | required |
Source code in src/synthorg/memory/consolidation/retention.py
cleanup_expired
async
¶
`cleanup_expired(agent_id, now=None, *, agent_category_overrides=None, agent_default_retention_days=None)`
Delete memories that have exceeded their retention period.
Processes each category independently so that a failure in one category does not prevent cleanup of the remaining categories.
Processes up to 1000 expired entries per category per invocation. Multiple calls may be needed for categories with a large backlog.
When agent_category_overrides or agent_default_retention_days
is provided, per-agent retention rules are merged with company
defaults using the internal resolution chain
(_resolve_categories).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agent_id` | `NotBlankStr` | Agent whose memories to clean up. | required |
| `now` | `datetime \| None` | Current time (defaults to UTC now). | `None` |
| `agent_category_overrides` | `Mapping[MemoryCategory, int] \| None` | Per-category retention overrides for this agent (mapping of category to days). | `None` |
| `agent_default_retention_days` | `int \| None` | Agent-level default retention in days. | `None` |

Returns:

| Type | Description |
|---|---|
| `int` | Number of expired memories deleted. |
Source code in src/synthorg/memory/consolidation/retention.py
archival
¶
Archival store protocol for long-term memory storage.
Defines the protocol for moving memories from the hot store into cold (archival) storage, with search and restore capabilities.
ArchivalStore
¶
Bases: Protocol
Protocol for long-term memory archival storage.
Concrete implementations handle moving memories from the hot (active) store into cold storage for long-term preservation.
archive
async
¶
Archive a memory entry.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `entry` | `ArchivalEntry` | The archival entry to store. | required |

Returns:

| Type | Description |
|---|---|
| `NotBlankStr` | The assigned archive entry ID. |
search
async
¶
Search archived entries for a specific agent.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agent_id` | `NotBlankStr` | Agent whose archived entries to search. | required |
| `query` | `MemoryQuery` | Search parameters. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[ArchivalEntry, ...]` | Matching archived entries owned by the agent. |
Source code in src/synthorg/memory/consolidation/archival.py
restore
async
¶
Restore a specific archived entry for a specific agent.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agent_id` | `NotBlankStr` | Agent who owns the archived entry. | required |
| `entry_id` | `NotBlankStr` | The archive entry ID. | required |

Returns:

| Type | Description |
|---|---|
| `ArchivalEntry \| None` | The archived entry, or `None` if no matching entry is owned by the agent. |
Source code in src/synthorg/memory/consolidation/archival.py
count
async
¶
Count archived entries for an agent.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `agent_id` | `NotBlankStr` | Agent identifier. | required |

Returns:

| Type | Description |
|---|---|
| `int` | Number of archived entries. |
simple_strategy
¶
Simple consolidation strategy.
Groups entries by category, keeps the most relevant entry per group (with most recent as tiebreaker), and creates a summary entry from the rest.
SimpleConsolidationStrategy
¶
Simple memory consolidation strategy.
Groups entries by category. For each group exceeding a threshold, keeps the entry with the highest relevance score (with most recent as tiebreaker), creates a summary entry from the rest, and deletes consolidated entries from the backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend` | `MemoryBackend` | Memory backend for storing summaries. | required |
| `group_threshold` | `int` | Minimum group size to trigger consolidation (must be >= 2). | `_DEFAULT_GROUP_THRESHOLD` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `group_threshold` is less than 2. |
Source code in src/synthorg/memory/consolidation/simple_strategy.py
consolidate
async
¶
Consolidate entries by grouping and summarizing per category.
Groups with fewer than group_threshold entries are left
unchanged.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `entries` | `tuple[MemoryEntry, ...]` | Memory entries to consolidate. | required |
| `agent_id` | `NotBlankStr` | Owning agent identifier. | required |

Returns:

| Type | Description |
|---|---|
| `ConsolidationResult` | Result describing what was consolidated. |
Source code in src/synthorg/memory/consolidation/simple_strategy.py
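The selection logic (keep the best entry per category, consolidate the rest) can be sketched in isolation. This is only the planning step: the real strategy also writes the summary to the backend and deletes consolidated entries. The `Entry` shape and field names below are illustrative, not the library's `MemoryEntry`.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Entry:
    id: str
    category: str
    relevance: float
    created_at: int  # simplified timestamp


def plan_consolidation(entries, group_threshold=3):
    """Return (kept, removed) per the keep-best / summarize-rest rule."""
    groups = defaultdict(list)
    for e in entries:
        groups[e.category].append(e)
    kept, removed = [], []
    for category, group in groups.items():
        if len(group) < group_threshold:
            kept.extend(group)  # small groups are left unchanged
            continue
        # Highest relevance wins; most recent breaks ties.
        best = max(group, key=lambda e: (e.relevance, e.created_at))
        kept.append(best)
        removed.extend(e for e in group if e.id != best.id)
    return kept, removed


entries = [
    Entry("m1", "task", 0.9, 1),
    Entry("m2", "task", 0.9, 2),   # ties m1 on relevance, wins on recency
    Entry("m3", "task", 0.2, 3),
    Entry("m4", "pref", 0.5, 1),   # group below threshold: untouched
]
kept, removed = plan_consolidation(entries)
print([e.id for e in kept])     # ['m2', 'm4']
print([e.id for e in removed])  # ['m1', 'm3']
```

The `(relevance, created_at)` tuple key encodes the documented tiebreak in a single `max` call.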
density
¶
Content density classification for dual-mode archival.
Heuristic-based classifier that determines whether memory content is sparse (conversational, narrative) or dense (code, structured data, identifiers). Classification is deterministic -- no LLM calls.
ContentDensity
¶
Bases: StrEnum
Classification of memory content density.
Determines the archival mode: sparse content receives abstractive LLM summarization, dense content receives extractive preservation.
DensityClassifier
¶
Heuristic content density classifier.
Classifies text as SPARSE or DENSE based on structural signals: code patterns, structured data markers, identifier density, numeric density, and line structure.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dense_threshold` | `float` | Score threshold for DENSE classification (0.0-1.0). Lower values classify more content as dense. | `0.5` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `dense_threshold` is outside the 0.0-1.0 range. |
Source code in src/synthorg/memory/consolidation/density.py
classify
¶
Classify content density.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `content` | `str` | Text to classify. | required |

Returns:

| Type | Description |
|---|---|
| `ContentDensity` | DENSE if score >= threshold, SPARSE otherwise. |
Source code in src/synthorg/memory/consolidation/density.py
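A heuristic in the spirit described above can be sketched as a weighted combination of structural signals. The specific signals and weights here are illustrative; the real classifier's scoring in `density.py` will differ, but the shape (deterministic score, compare against threshold) is the same.

```python
import re


def density_score(content: str) -> float:
    """Score 0.0 (sparse prose) .. 1.0 (dense structured content)."""
    if not content:
        return 0.0
    signals = 0.0
    if re.search(r"[{}\[\]();=]", content):  # code / structured-data markers
        signals += 0.4
    tokens = content.split()
    if tokens:
        # Identifier-like tokens: snake_case, dotted paths, anything with digits.
        ident = sum(bool(re.search(r"[_./]|\d", t)) for t in tokens) / len(tokens)
        signals += 0.4 * ident
        digits = sum(c.isdigit() for c in content) / len(content)
        signals += 0.2 * min(1.0, digits * 5)  # numeric density
    return min(1.0, signals)


def classify(content: str, dense_threshold: float = 0.5) -> str:
    return "dense" if density_score(content) >= dense_threshold else "sparse"


print(classify("We talked about the roadmap and agreed to follow up next week."))      # sparse
print(classify("deploy_id=a1b2c3 host=10.0.0.4 retries=3 path=/srv/app/config.yaml"))  # dense
```

Because no LLM is involved, classification is cheap and fully reproducible, which matters when consolidation runs on a schedule.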
classify_batch
¶
Classify density for a batch of memory entries.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `entries` | `tuple[MemoryEntry, ...]` | Memory entries to classify. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[tuple[MemoryEntry, ContentDensity], ...]` | Tuple of (entry, density) pairs in input order. |
Source code in src/synthorg/memory/consolidation/density.py
extractive
¶
Extractive preservation for dense memory content.
For dense content (code, structured data, identifiers), summarization is catastrophically lossy. Instead, this module extracts verbatim key facts and structural anchors (start/mid/end) to preserve the most important information.
ExtractivePreserver
¶
Extracts key facts and structural anchors from dense content.
For dense content (code, structured data, IDs), summarization is catastrophically lossy. Instead, this preserver extracts verbatim key facts (identifiers, URLs, version numbers, key-value pairs) and structural anchors (start/mid/end snippets of the original).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_facts` | `int` | Maximum number of key facts to extract. | `20` |
| `anchor_length` | `int` | Character length of each anchor snippet. | `150` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `max_facts` or `anchor_length` is not positive. |
Source code in src/synthorg/memory/consolidation/extractive.py
extract
¶
Extract key facts and anchors from dense content.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `content` | `str` | The dense text to extract from. | required |

Returns:

| Type | Description |
|---|---|
| `str` | Structured text block with extracted facts and anchors. |
Source code in src/synthorg/memory/consolidation/extractive.py
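The extract-verbatim approach can be sketched in a few lines. The fact pattern and output layout below are assumptions chosen for illustration (URLs, version numbers, key=value pairs, snake_case identifiers, plus start/mid/end anchors), not the preserver's real heuristics:

```python
import re

def extract(content: str, max_facts: int = 20, anchor_length: int = 150) -> str:
    """Toy extractive preservation: verbatim key facts plus start/mid/end anchors."""
    if max_facts < 1 or anchor_length < 1:
        raise ValueError("max_facts and anchor_length must be positive")
    # Key facts preserved verbatim: URLs, versions, key=value pairs, identifiers.
    fact_pattern = re.compile(
        r"https?://\S+|\b\d+\.\d+(?:\.\d+)?\b|\b\w+=\S+|\b[a-z]+_[a-z_]+\b"
    )
    # dict.fromkeys deduplicates while preserving first-seen order.
    facts = list(dict.fromkeys(fact_pattern.findall(content)))[:max_facts]
    mid = max(0, len(content) // 2 - anchor_length // 2)
    anchors = [
        content[:anchor_length],           # start anchor
        content[mid:mid + anchor_length],  # middle anchor
        content[-anchor_length:],          # end anchor
    ]
    return "\n".join(["[facts]"] + facts + ["[anchors]"] + anchors)

report = extract(
    "deploy_script 2.1.3 ran with retries=5 against https://ci.example.com/build",
    anchor_length=30,
)
```

Because the facts are copied verbatim rather than paraphrased, identifiers and version numbers survive consolidation exactly, which is the property summarization cannot guarantee.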
abstractive
¶
Abstractive summarizer for sparse memory content.
Uses an LLM (via CompletionProvider) to generate concise summaries
of conversational/narrative memory content. Falls back to truncation
if the LLM call fails.
AbstractiveSummarizer
¶
LLM-based abstractive summarizer for sparse content.
Uses a CompletionProvider to generate concise summaries of
conversational/narrative memory content. Falls back to truncation
if the LLM call fails with a retryable error.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `CompletionProvider` | Completion provider for LLM calls. | required |
| `model` | `NotBlankStr` | Model identifier to use for summarization. | required |
| `max_summary_tokens` | `int` | Maximum tokens for the summary response. | `200` |
| `temperature` | `float` | Sampling temperature for summarization. | `0.3` |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If `max_summary_tokens` or `temperature` is invalid. |
Source code in src/synthorg/memory/consolidation/abstractive.py
summarize
async
¶
Generate an abstractive summary of the given content.
Falls back to truncation if the LLM call fails with a retryable error or returns empty content. Non-retryable provider errors (authentication, invalid model) propagate.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `content` | `str` | The sparse/conversational text to summarize. | required |

Returns:

| Type | Description |
|---|---|
| `str` | Summary text. |
Source code in src/synthorg/memory/consolidation/abstractive.py
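The fallback-to-truncation pattern looks roughly like this. The sketch assumes a hypothetical async `provider` callable and uses `ConnectionError` as a stand-in for a retryable provider error; character-based truncation stands in for the real token-based limit:

```python
import asyncio

async def summarize(content: str, provider, *, max_chars: int = 200) -> str:
    """Try the LLM for a summary; fall back to truncation on retryable failure."""
    try:
        summary = await provider(f"Summarize concisely:\n{content}")
    except ConnectionError:  # stand-in for a retryable provider error
        summary = ""
    if not summary.strip():  # an empty LLM response also falls back
        return content[:max_chars]
    return summary.strip()

async def flaky_provider(prompt: str) -> str:
    # Hypothetical provider that always fails with a retryable error.
    raise ConnectionError("upstream timeout")

text = "A long chat about planning the next sprint. " * 10
result = asyncio.run(summarize(text, flaky_provider))
```

Truncation is a deliberately crude fallback: it keeps consolidation moving when the LLM is unavailable, trading summary quality for guaranteed progress. Non-retryable errors (bad credentials, unknown model) should propagate instead, as the real summarizer does.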
summarize_batch
async
¶
Summarize multiple entries concurrently.
Each entry is summarized independently via asyncio.TaskGroup.
Failures for individual entries fall back to truncation without
aborting the batch.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `entries` | `tuple[MemoryEntry, ...]` | Memory entries to summarize. | required |

Returns:

| Type | Description |
|---|---|
| `tuple[tuple[NotBlankStr, str], ...]` | Tuple of `(entry_id, summary)` pairs in input order. |
Source code in src/synthorg/memory/consolidation/abstractive.py
dual_mode_strategy
¶
Dual-mode consolidation strategy.
Density-aware consolidation: classifies entries as sparse or dense, then applies LLM abstractive summarization (sparse) or extractive preservation (dense) accordingly.
DualModeConsolidationStrategy
¶
DualModeConsolidationStrategy(
*,
backend,
classifier,
extractor,
summarizer,
group_threshold=_DEFAULT_GROUP_THRESHOLD,
)
Density-aware consolidation strategy.
Classifies entries by content density and applies the appropriate archival mode: LLM abstractive summarization for sparse content, extractive key-fact preservation for dense content.
Groups entries by category. For each group exceeding the threshold, classifies density per-entry, determines the group mode by majority vote, selects the best entry to keep, and processes the rest.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend` | `MemoryBackend` | Memory backend for storing summaries/extractions. | required |
| `classifier` | `DensityClassifier` | Density classifier instance. | required |
| `extractor` | `ExtractivePreserver` | Extractive preserver instance. | required |
| `summarizer` | `AbstractiveSummarizer` | Abstractive summarizer instance. | required |
| `group_threshold` | `int` | Minimum group size to trigger consolidation (must be >= 2). | `_DEFAULT_GROUP_THRESHOLD` |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If `group_threshold` is less than 2. |
Source code in src/synthorg/memory/consolidation/dual_mode_strategy.py
consolidate
async
¶
Consolidate entries using density-aware dual-mode approach.
Groups entries by category, classifies density, selects archival mode by majority vote, then processes entries accordingly.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `entries` | `tuple[MemoryEntry, ...]` | Memory entries to consolidate. | required |
| `agent_id` | `NotBlankStr` | Owning agent identifier. | required |

Returns:

| Type | Description |
|---|---|
| `ConsolidationResult` | Result describing what was consolidated. |
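The group-then-vote planning step can be sketched as follows. This is an illustrative reduction of the strategy (grouping by category, skipping small groups, majority vote over per-entry densities); the tie-breaking rule and the mode names are assumptions, and the real strategy additionally selects a best entry to keep and archives the rest:

```python
from collections import Counter, defaultdict

def plan_consolidation(entries, group_threshold=5):
    """Group (category, density) pairs; pick a per-group mode by majority vote.

    Returns {category: mode} for every group at or above the threshold.
    """
    if group_threshold < 2:
        raise ValueError("group_threshold must be >= 2")
    groups = defaultdict(list)
    for category, density in entries:
        groups[category].append(density)
    plan = {}
    for category, densities in groups.items():
        if len(densities) < group_threshold:
            continue  # small groups are left untouched
        votes = Counter(densities)
        # Majority vote decides the archival mode for the whole group;
        # ties go to extractive here (an assumption, to avoid lossy summaries).
        mode = "extractive" if votes["dense"] >= votes["sparse"] else "abstractive"
        plan[category] = mode
    return plan

memories = (
    [("code", "dense")] * 3 + [("code", "sparse")] * 2
    + [("chat", "sparse")] * 5 + [("misc", "dense")]
)
plan = plan_consolidation(memories, group_threshold=5)
```

Voting per group rather than per entry keeps each category's archive homogeneous: a mostly-dense group is preserved extractively even if a few entries in it are conversational.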