ForgeClient API

The ForgeClient class is the main interface for interacting with the Glyph Forge API.

class glyph_forge.core.client.forge_client.ForgeClient(api_key=None, base_url=None, *, timeout=30.0)[source]

Bases: object

Local SDK-based client for Glyph Forge.

Uses the Glyph SDK directly to build and run schemas locally. No API key is required; all processing happens on your machine.

Parameters:
  • api_key (Optional[str]) – Deprecated. No longer used (kept for backwards compatibility).

  • base_url (Optional[str]) – Deprecated. No longer used (kept for backwards compatibility).

  • timeout (float) – Deprecated. No longer used (kept for backwards compatibility).

Example

>>> from glyph_forge import ForgeClient, create_workspace
>>> ws = create_workspace()
>>> client = ForgeClient()
>>> schema = client.build_schema_from_docx(ws, docx_path="sample.docx")
__init__(api_key=None, base_url=None, *, timeout=30.0)[source]

Initialize ForgeClient.

Parameters:

Same as the class constructor; all parameters are deprecated and ignored.

__enter__()[source]

Enter the context manager, returning the client.

__exit__(exc_type, exc_val, exc_tb)[source]

Exit the context manager, closing the client.

close()[source]

Close the client and cleanup resources.

build_schema_from_docx(ws, *, docx_path, save_as=None, include_artifacts=False)[source]

Build a schema from a DOCX file using the local SDK.

Parameters:
  • ws (Any) – Workspace instance for saving artifacts

  • docx_path (str) – Path to DOCX file (absolute or CWD-relative)

  • save_as (Optional[str]) – Optional name to save schema JSON (without .json extension)

  • include_artifacts (bool) – If True, save tagged DOCX + unzipped files (default: False)

Return type:

Dict[str, Any]

Returns:

Schema dict

Raises:

ForgeClientError – File not found or processing error

Example

>>> schema = client.build_schema_from_docx(
...     ws,
...     docx_path="sample.docx",
...     save_as="my_schema"
... )
run_schema(ws, *, schema, plaintext, dest_name='assembled_output.docx')[source]

Run a schema with plaintext to generate a DOCX using the local SDK.

Parameters:
  • ws (Any) – Workspace instance

  • schema (Dict[str, Any]) – Schema dict (from build_schema_from_docx or loaded JSON)

  • plaintext (str) – Input text content

  • dest_name (str) – Name for output DOCX file (saved in output_docx directory)

Return type:

str

Returns:

Local path to saved DOCX file

Raises:

ForgeClientError – Failed to run schema or save DOCX

Example

>>> docx_path = client.run_schema(
...     ws,
...     schema=schema,
...     plaintext="Sample text...",
...     dest_name="output.docx"
... )
run_schema_bulk(ws, *, schema, plaintexts, max_concurrent=5, dest_name_pattern='output_{index}.docx')[source]

Run a schema with multiple plaintexts to generate multiple DOCX files.

Parameters:
  • ws (Any) – Workspace instance

  • schema (Dict[str, Any]) – Schema dict (from build_schema_from_docx or loaded JSON)

  • plaintexts (list[str]) – List of plaintext strings to process

  • max_concurrent (int) – Ignored in local SDK mode (processed sequentially)

  • dest_name_pattern (str) – Pattern for output filenames. Use {index} placeholder

Return type:

Dict[str, Any]

Returns:

Dict containing results with status, paths, and timing info

Example

>>> result = client.run_schema_bulk(
...     ws,
...     schema=schema,
...     plaintexts=["Text 1...", "Text 2...", "Text 3..."],
...     dest_name_pattern="invoice_{index}.docx"
... )
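The {index} placeholder in dest_name_pattern follows str.format-style substitution. A quick sketch of how three plaintexts map to filenames (this sketch assumes a zero-based index; the SDK may number differently):

```python
# How a dest_name_pattern expands for three plaintexts.
pattern = "invoice_{index}.docx"
names = [pattern.format(index=i) for i in range(3)]
print(names)  # ['invoice_0.docx', 'invoice_1.docx', 'invoice_2.docx']
```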
compress_schema(ws, *, schema, save_as=None)[source]

Compress a schema by deduplicating redundant pattern descriptors.

Parameters:
  • ws (Any) – Workspace instance

  • schema (Dict[str, Any]) – Schema dict to compress

  • save_as (Optional[str]) – Optional name to save compressed schema JSON

Return type:

Dict[str, Any]

Returns:

Dict containing compressed_schema and stats

Example

>>> result = client.compress_schema(
...     ws,
...     schema=schema,
...     save_as="compressed_schema"
... )
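To illustrate the idea behind compression (this is not the SDK's actual algorithm): deduplication can key each pattern descriptor on a canonical serialization and keep only the first occurrence, which is where counts like original_count and compressed_count come from.

```python
import json

# Illustrative only: collapse descriptors that serialize identically.
descriptors = [
    {"style": "Heading1", "level": 1},
    {"style": "Body"},
    {"style": "Heading1", "level": 1},  # redundant duplicate
]

seen = set()
deduped = []
for d in descriptors:
    key = json.dumps(d, sort_keys=True)  # canonical form for comparison
    if key not in seen:
        seen.add(key)
        deduped.append(d)

stats = {"original_count": len(descriptors), "compressed_count": len(deduped)}
```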
intake_plaintext_text(ws, *, text, save_as=None, **opts)[source]

Intake plaintext via text string (local processing).

Parameters:
  • ws (Any) – Workspace instance

  • text (str) – Plaintext content to intake

  • save_as (Optional[str]) – Optional name to save intake result JSON

  • **opts (Any) – Additional options (unicode_form, strip_zero_width, etc.)

Return type:

Dict[str, Any]

Returns:

Intake result dict

Example

>>> result = client.intake_plaintext_text(
...     ws,
...     text="Sample text...",
...     save_as="intake_result"
... )
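The option names suggest standard Unicode cleanup; for instance, unicode_form="NFC" and strip_zero_width=True plausibly correspond to operations like the following (the SDK's exact behavior is an assumption here, not documented above):

```python
import unicodedata

raw = "Cafe\u0301\u200b menu"  # combining acute accent + zero-width space
cleaned = raw.replace("\u200b", "")              # strip zero-width characters
cleaned = unicodedata.normalize("NFC", cleaned)  # compose to canonical form
print(cleaned)  # 'Café menu'
```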
intake_plaintext_file(ws, *, file_path, save_as=None, **opts)[source]

Intake plaintext from file (local processing).

Parameters:
  • ws (Any) – Workspace instance

  • file_path (str) – Path to plaintext file

  • save_as (Optional[str]) – Optional name to save intake result JSON

  • **opts (Any) – Additional options

Return type:

Dict[str, Any]

Returns:

Intake result dict

Example

>>> result = client.intake_plaintext_file(
...     ws,
...     file_path="sample.txt",
...     save_as="intake_result"
... )
ask(*, message, tenant_id=None, user_id=None, conversation_id=None, conversation_history=None, current_schema=None, current_plaintext=None, current_document=None, real_time=False, strict_validation=False)[source]

Send a message to the Glyph Agent multi-agent system via API.

This endpoint orchestrates:

  1. Intent classification

  2. Agent routing (schema, plaintext, validation, conversation)

  3. Multi-step workflows

  4. Markup application

  5. Conversation state management

Parameters:
  • message (str) – The message to send to the agent (required)

  • tenant_id (Optional[str]) – Tenant identifier for rate limiting

  • user_id (Optional[str]) – User identifier for rate limiting

  • conversation_id (Optional[str]) – Conversation ID for context tracking

  • conversation_history (Optional[List[Dict[str, str]]]) – Previous conversation messages for context; a list of dicts with ‘role’ and ‘content’ keys

  • current_schema (Optional[Dict[str, Any]]) – Current schema state (for incremental modifications)

  • current_plaintext (Optional[str]) – Current plaintext content (for incremental modifications)

  • current_document (Optional[Dict[str, Any]]) – Legacy combined document state

  • real_time (bool) – Enable real-time sandbox updates

  • strict_validation (bool) – Enable strict validation mode

Returns:

Dict containing:

  • response: The agent’s response message

  • document: Generated or modified document (if applicable)

  • schema/document_schema: Document schema (if schema request)

  • plaintext: Generated plaintext content

  • validation_result: Validation results (if validation request)

  • metadata: Additional metadata (intent, routing, etc.)

  • usage: Token usage information

  • conversation_id: Conversation ID for tracking

Return type:

Dict[str, Any]

Example

>>> client = ForgeClient()
>>> response = client.ask(
...     message="Create a schema for a quarterly report",
...     user_id="user123"
... )
>>> print(response['response'])
>>> if 'schema' in response:
...     print(f"Schema generated: {len(response['schema']['pattern_descriptors'])} descriptors")
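conversation_history is a plain list of role/content dicts, so multi-turn context can be threaded by appending each exchange before the next ask() call. A sketch of the payload shape (the follow-up call itself is left as a comment since it needs a live client):

```python
# Build the running history from prior turns.
history = [
    {"role": "user", "content": "Create a schema for a quarterly report"},
    {"role": "assistant", "content": "Draft schema created."},
]
history.append({"role": "user", "content": "Add a revenue table section"})

# A follow-up call would then pass the history and the tracked conversation ID:
# client.ask(
#     message=history[-1]["content"],
#     conversation_history=history[:-1],
#     conversation_id=response["conversation_id"],
# )
```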

Core Methods

Schema Building

ForgeClient.build_schema_from_docx(ws, *, docx_path, save_as=None, include_artifacts=False)[source]

Build a schema from a DOCX file using the local SDK.

Parameters:
  • ws (Any) – Workspace instance for saving artifacts

  • docx_path (str) – Path to DOCX file (absolute or CWD-relative)

  • save_as (Optional[str]) – Optional name to save schema JSON (without .json extension)

  • include_artifacts (bool) – If True, save tagged DOCX + unzipped files (default: False)

Return type:

Dict[str, Any]

Returns:

Schema dict

Raises:

ForgeClientError – File not found or processing error

Example

>>> schema = client.build_schema_from_docx(
...     ws,
...     docx_path="sample.docx",
...     save_as="my_schema"
... )

Schema Running

ForgeClient.run_schema(ws, *, schema, plaintext, dest_name='assembled_output.docx')[source]

Run a schema with plaintext to generate a DOCX using the local SDK.

Parameters:
  • ws (Any) – Workspace instance

  • schema (Dict[str, Any]) – Schema dict (from build_schema_from_docx or loaded JSON)

  • plaintext (str) – Input text content

  • dest_name (str) – Name for output DOCX file (saved in output_docx directory)

Return type:

str

Returns:

Local path to saved DOCX file

Raises:

ForgeClientError – Failed to run schema or save DOCX

Example

>>> docx_path = client.run_schema(
...     ws,
...     schema=schema,
...     plaintext="Sample text...",
...     dest_name="output.docx"
... )

Bulk Processing

ForgeClient.run_schema_bulk(ws, *, schema, plaintexts, max_concurrent=5, dest_name_pattern='output_{index}.docx')[source]

Run a schema with multiple plaintexts to generate multiple DOCX files.

Parameters:
  • ws (Any) – Workspace instance

  • schema (Dict[str, Any]) – Schema dict (from build_schema_from_docx or loaded JSON)

  • plaintexts (list[str]) – List of plaintext strings to process

  • max_concurrent (int) – Ignored in local SDK mode (processed sequentially)

  • dest_name_pattern (str) – Pattern for output filenames. Use {index} placeholder

Return type:

Dict[str, Any]

Returns:

Dict containing results with status, paths, and timing info

Example

>>> result = client.run_schema_bulk(
...     ws,
...     schema=schema,
...     plaintexts=["Text 1...", "Text 2...", "Text 3..."],
...     dest_name_pattern="invoice_{index}.docx"
... )

Schema Compression

ForgeClient.compress_schema(ws, *, schema, save_as=None)[source]

Compress a schema by deduplicating redundant pattern descriptors.

Parameters:
  • ws (Any) – Workspace instance

  • schema (Dict[str, Any]) – Schema dict to compress

  • save_as (Optional[str]) – Optional name to save compressed schema JSON

Return type:

Dict[str, Any]

Returns:

Dict containing compressed_schema and stats

Example

>>> result = client.compress_schema(
...     ws,
...     schema=schema,
...     save_as="compressed_schema"
... )

Plaintext Intake

ForgeClient.intake_plaintext_text(ws, *, text, save_as=None, **opts)[source]

Intake plaintext via text string (local processing).

Parameters:
  • ws (Any) – Workspace instance

  • text (str) – Plaintext content to intake

  • save_as (Optional[str]) – Optional name to save intake result JSON

  • **opts (Any) – Additional options (unicode_form, strip_zero_width, etc.)

Return type:

Dict[str, Any]

Returns:

Intake result dict

Example

>>> result = client.intake_plaintext_text(
...     ws,
...     text="Sample text...",
...     save_as="intake_result"
... )
ForgeClient.intake_plaintext_file(ws, *, file_path, save_as=None, **opts)[source]

Intake plaintext from file (local processing).

Parameters:
  • ws (Any) – Workspace instance

  • file_path (str) – Path to plaintext file

  • save_as (Optional[str]) – Optional name to save intake result JSON

  • **opts (Any) – Additional options

Return type:

Dict[str, Any]

Returns:

Intake result dict

Example

>>> result = client.intake_plaintext_file(
...     ws,
...     file_path="sample.txt",
...     save_as="intake_result"
... )

Client Management

ForgeClient.close()[source]

Close the client and cleanup resources.

Usage Examples

Basic Schema Build and Run

from glyph_forge import ForgeClient, create_workspace

# Initialize (no API key needed; processing is local)
client = ForgeClient()
ws = create_workspace()

# Build schema
schema = client.build_schema_from_docx(
    ws,
    docx_path="template.docx",
    save_as="my_schema"
)

# Run schema
output = client.run_schema(
    ws,
    schema=schema,
    plaintext="Content here...",
    dest_name="output.docx"
)

With Context Manager

from glyph_forge import ForgeClient, create_workspace

ws = create_workspace()

with ForgeClient() as client:
    schema = client.build_schema_from_docx(
        ws,
        docx_path="template.docx"
    )

Bulk Processing

# Process multiple documents at once
plaintexts = ["Text 1...", "Text 2...", "Text 3..."]

result = client.run_schema_bulk(
    ws,
    schema=schema,
    plaintexts=plaintexts,
    max_concurrent=5,
    dest_name_pattern="output_{index}.docx"
)

print(f"Processed {result['successful']} of {result['total']}")

Schema Compression

# Compress schema to optimize size
result = client.compress_schema(
    ws,
    schema=schema,
    save_as="compressed_schema"
)

print(f"Reduced from {result['stats']['original_count']} "
      f"to {result['stats']['compressed_count']} pattern descriptors")