
Engine Services

The engine hosts 37 services organized into four groups: resource management, intelligence, data, and external integration.

Service Groups

Resource Management

| Service | Purpose |
|---|---|
| OrchestratorService | vLLM endpoint lifecycle and GPU allocation |
| SchedulerService | Priority-based job queue with XAI budget |
| HealthService | GPU and endpoint health monitoring |
| AgendaTracker | Tracks scheduled endpoint transitions for makespan operations |

Intelligence

| Service | Purpose |
|---|---|
| EvolutionService | Agent prompt optimization via APO |
| CognitionService | Autonomous thought generation (every 4h) |
| CLTService | Cross-Layer Transcoder feature extraction |
| TopologyService | Semantic attractor detection and drift |
| NGRCPredictor | Reservoir computing for temporal prediction |

Data

| Service | Purpose |
|---|---|
| DatasetService | NiFi SoM dataset generation |
| FlowSchedulerService | Metaflow pipeline scheduling |
| KBService | Knowledge base CRUD operations |
| LineageService | Provenance tracking |

External Integration

| Service | Purpose |
|---|---|
| XBookmarksService | X (Twitter) bookmark synchronization |

Service Registration

Services register with the engine during startup. Each service implements a standard lifecycle:

```python
class SomeService:
    async def start(self) -> None:
        """Initialize resources, start background tasks."""
        ...

    async def stop(self) -> None:
        """Clean shutdown, release resources."""
        ...
```
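To make the lifecycle concrete, here is a minimal sketch of what registration could look like. The `ServiceRegistry` class and its method names are assumptions for illustration (the chapter does not show the engine's actual registration API); the key idea is that services start in registration order and stop in reverse order so dependents shut down before their dependencies.

```python
import asyncio


class ServiceRegistry:
    """Hypothetical registry sketch: starts services in registration
    order, stops them in reverse order on shutdown."""

    def __init__(self) -> None:
        self._services = []

    def register(self, service) -> None:
        # Each service is expected to implement start()/stop() as above.
        self._services.append(service)

    async def start_all(self) -> None:
        for svc in self._services:
            await svc.start()

    async def stop_all(self) -> None:
        # Reverse order: dependents stop before the services they rely on.
        for svc in reversed(self._services):
            await svc.stop()
```

In this sketch, registering the orchestrator before the scheduler would guarantee the scheduler never outlives the orchestrator it depends on.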

Background Tasks

Several services run scheduled background tasks:

| Task | Service | Schedule | Purpose |
|---|---|---|---|
| cognition_cycle | CognitionService | Every 4h | Pattern detection in KB activity |
| self_observation | CognitionService | Every 8h | Meta-cognitive reflection |
| engine_audit | CognitionService | Every 12h | System health analysis |
| Evolution cycle | EvolutionService | GPU idle | Agent prompt optimization |
| Health check | HealthService | Every 30s | Endpoint liveness |

Service Dependencies

Services form a dependency graph. The orchestrator and scheduler are foundational, since most other services depend on them for inference access:

```text
OrchestratorService → vLLM Controller → GPU Pool
SchedulerService → OrchestratorService
EvolutionService → SchedulerService
CognitionService → SchedulerService
HealthService → GPU Pool (via pynvml)
TopologyService → CLTService
```
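Given a dependency graph like this, a valid startup order is a topological sort. The sketch below uses Python's standard-library `graphlib`; the `DEPS` mapping is an assumption transcribed from the arrows above (each service maps to the services it depends on), not the engine's actual data structure.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map mirroring the graph above:
# service -> set of services it depends on.
DEPS = {
    "OrchestratorService": set(),
    "SchedulerService": {"OrchestratorService"},
    "EvolutionService": {"SchedulerService"},
    "CognitionService": {"SchedulerService"},
    "CLTService": set(),
    "TopologyService": {"CLTService"},
    "HealthService": set(),
}

# static_order() yields dependencies before their dependents,
# so starting services in this order is always safe.
startup_order = list(TopologicalSorter(DEPS).static_order())
```

Stopping services in the reverse of `startup_order` guarantees, for example, that the scheduler shuts down before the orchestrator it relies on.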

See the individual service chapters for implementation details: