College Project 1oic
College Project 1oic
Table of Contents
1. Introduction
2. System Requirements
3. AI Assistants Overview
4. Scheduling System
5. Resource Allocation
6. Core Framework Implementation
7. NAMI: System Control Assistant
8. RUSH: Interactive Lecture Assistant
9. VEX: Automated Vlog Editor
10. Pause/Resume Workflow
11. Troubleshooting
12. Performance Optimization
13. Future Expansion
14. Stress Testing Results
Introduction
This guide will walk you through implementing three powerful AI
assistants on your Mac Mini with an efficient pause/resume workflow.
This implementation includes actual code that you can use to build a
sophisticated system of AI assistants that work together seamlessly.
System Requirements
This section outlines everything you'll need to get started with your AI
assistants setup:
Hardware Requirements
10. Make sure each assistant works properly before setting up the
scheduling system
13. Once all assistants are working, set up the scheduling system
14. Test transitions between assistants
This step-by-step approach ensures you won't get overwhelmed and
can address any issues as they arise.
AI Assistants Overview
Each assistant serves a specific purpose and operates during
designated hours:
RAM
Assistant Function Operating Hours
Usage
Scheduling System
The scheduling system ensures that each assistant operates during
its designated hours, with automatic transitions between them. This
approach optimizes resource usage and ensures that intensive tasks
like video editing happen during off-hours.
1. Time-Based Activation:
2. NAMI activates at 7:00 AM and runs until 12:00 AM (midnight)
3. RUSH activates at 8:00 AM and runs until 11:00 PM
(overlapping with NAMI)
9. Manual Override:
2. For each assistant, you'll create a plist file that defines when it
should run (detailed instructions in each assistant's section)
Resource Allocation
The system intelligently allocates RAM resources based on which
assistants are currently active:
1. Dynamic Allocation:
2. When NAMI runs alone (7:00 AM - 8:00 AM), it can use up to 3
GB RAM
3. When NAMI and RUSH overlap (8:00 AM - 11:00 PM), NAMI is
limited to 2 GB RAM
4. When NAMI runs alone (11:00 PM - 12:00 AM), it can use up to
3 GB RAM
6. Resource Monitoring:
import os
import json
import time
from typing import Dict, List, Optional, Any
class ModelState:
"""Class representing the state of a model"""
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'ModelState':
"""Create state from dictionary"""
state = cls(data["model_name"])
state.active = data["active"]
state.current_task = data["current_task"]
state.task_progress = data["task_progress"]
state.checkpoint_path = data["checkpoint_path"]
state.last_updated = data["last_updated"]
return state
class StateManager:
"""Manager for model states"""
def __init__(self, state_dir: str = "/Users/Shared/ai_models/s
"""Initialize state manager"""
self.state_dir = state_dir
self.models = ["rush", "nami", "vex"]
self.active_model = None
# Initialize states
self._initialize_states()
def _initialize_states(self):
"""Initialize states for all models"""
for model_name in self.models:
state_path = os.path.join(self.state_dir, f"{model_nam
if os.path.exists(state_path):
with open(state_path, 'r') as f:
data = json.load(f)
return ModelState.from_dict(data)
else:
# Create new state if it doesn't exist
state = ModelState(model_name)
self._save_state(state)
return state
return state
return state
return state
import os
import subprocess
import json
import psutil
from typing import Dict, Any, Optional
class MemoryManager:
"""Manager for system memory resources"""
def __init__(self):
"""Initialize memory manager"""
# Define memory limits for models (in MB)
self.memory_limits = {
# When active
"active": {
"rush": 16384, # 16GB
"nami": 3072, # 3GB
"vex": 20480, # 20GB
"default": 8192 # 8GB
},
# When paused
"paused": {
"rush": 2048, # 2GB
"nami": 1024, # 1GB
"vex": 2048, # 2GB
"default": 1024 # 1GB
}
}
if model_name in self.memory_limits[category]:
return self.memory_limits[category][model_name]
else:
return self.memory_limits[category]["default"]
return {
"pressure": pressure,
"stats": stats
}
return True
What this code does: - Defines memory limits for each assistant in
both active and paused states - Provides methods to get system
memory information and calculate memory pressure - Implements a
method to adjust memory limits for assistants based on their state -
Uses macOS-specific tools like vm_stat to monitor memory usage
3. Checkpoint Manager Implementation
import os
import json
import pickle
import time
import uuid
from typing import Dict, Any, Tuple, Optional, List
class CheckpointManager:
"""Manager for model checkpoints"""
return f"{model_name}_{task_name}_{timestamp}_{unique_id}"
def save_checkpoint(self, model_name: str, task_name: str, che
task_data: Optional[Dict[str, Any]] = None)
"""
Save a checkpoint
Args:
model_name: Name of the model
task_name: Name of the task
checkpoint_data: Data to save in the checkpoint
task_data: Additional task-specific data
Returns:
Tuple of (checkpoint_id, checkpoint_path)
"""
# Create model directory if it doesn't exist
model_dir = os.path.join(self.checkpoint_dir, model_name)
os.makedirs(model_dir, exist_ok=True)
# Create checkpoint ID
checkpoint_id = self.create_checkpoint_id(model_name, task
# Save metadata
metadata = {
"model_name": model_name,
"task_name": task_name,
"created_at": time.time(),
"checkpoint_id": checkpoint_id,
"checkpoint_path": checkpoint_path,
"task_data": task_data
}
metadata_path = os.path.join(model_dir, f"{checkpoint_id}.
with open(metadata_path, 'w') as f:
json.dump(metadata, f, indent=2)
Args:
checkpoint_id: ID of the checkpoint to load
model_name: Name of the model (if loading latest check
task_name: Name of the task (if loading latest checkpo
Returns:
Tuple of (metadata, checkpoint_data)
"""
if checkpoint_id:
# Find checkpoint by ID
for model in ["rush", "nami", "vex"]:
model_dir = os.path.join(self.checkpoint_dir, mode
metadata_path = os.path.join(model_dir, f"{checkpo
if os.path.exists(metadata_path):
# Load metadata
with open(metadata_path, 'r') as f:
metadata = json.load(f)
if not os.path.exists(model_dir):
raise ValueError(f"No checkpoints found for model
if metadata["task_name"] == task_name:
metadata_files.append((metadata_path, meta
if not metadata_files:
raise ValueError(f"No checkpoints found for model
else:
raise ValueError("Either checkpoint_id or (model_name
return False
Returns:
Number of checkpoints deleted
"""
deleted_count = 0
cutoff_time = time.time() - (days * 24 * 60 * 60)
if not os.path.exists(model_dir):
continue
# Find all metadata files
for filename in os.listdir(model_dir):
if filename.endswith(".json"):
metadata_path = os.path.join(model_dir, filena
if os.path.exists(checkpoint_path):
os.remove(checkpoint_path)
os.remove(metadata_path)
deleted_count += 1
return deleted_count
What this code does: - Creates and manages checkpoints for each
assistant - Provides methods to save and load checkpoint data -
Implements a cleanup mechanism to remove old checkpoints - Uses
a combination of JSON metadata and pickle files to store checkpoint
data
import os
import subprocess
import time
import threading
import queue
import logging
from typing import Dict, Any, Optional, List
from state_manager import StateManager
from memory_manager import MemoryManager
from checkpoint_manager import CheckpointManager
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("/Users/Shared/ai_models/logs/controll
logging.StreamHandler()
]
)
class ModelController:
"""Controller for managing AI models"""
def __init__(self):
"""Initialize model controller"""
self.logger = logging.getLogger("ModelController")
# Initialize managers
self.state_manager = StateManager()
self.memory_manager = MemoryManager()
self.checkpoint_manager = CheckpointManager()
# Model scripts
self.model_scripts = {
"rush": "/Users/Shared/ai_models/rush/rush_main.py",
"nami": "/Users/Shared/ai_models/nami/nami_main.py",
"vex": "/Users/Shared/ai_models/vex/vex_main.py"
}
# Model processes
self.model_processes = {}
# Command queue
self.command_queue = queue.Queue()
script_path = self.model_scripts[model_name]
if not os.path.exists(script_path):
self.logger.error(f"Script not found: {script_path}")
return False
try:
# Start the model process
process = subprocess.Popen(
["python3", script_path],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=1,
universal_newlines=True
)
self.model_processes[model_name] = process
self.logger.info(f"Started model: {model_name}")
return True
except Exception as e:
self.logger.error(f"Error starting model {model_name}:
return False
process = self.model_processes[model_name]
try:
# Terminate the process
process.terminate()
process.wait(timeout=5)
Args:
model_name: Name of the model to get status for, or No
Returns:
Dictionary of model status information
"""
if model_name:
# Get status for a specific model
state = self.state_manager.get_state(model_name)
return {
"model": model_name,
"active": state.active,
"running": running,
"current_task": state.current_task,
"task_progress": state.task_progress,
"checkpoint_path": state.checkpoint_path,
"last_updated": state.last_updated
}
else:
# Get status for all models
statuses = {}
return statuses
def queue_command(self, command: str):
"""Queue a command for processing"""
self.command_queue.put(command)
def _process_commands(self):
"""Process commands from the queue"""
while True:
try:
# Get command from queue
command = self.command_queue.get()
# Process command
self.handle_command(command)
Commands:
- "rush i am home": Switch from RUSH to NAMI
- "return to rush": Switch back to RUSH
- "start vex": Switch to VEX
"""
command = command.lower().strip()
try:
if "rush" in command and "home" in command:
# Switch from RUSH to NAMI
self.logger.info("Command: Switch from RUSH to NAM
# Pause RUSH
self.pause_model("rush")
# Activate NAMI
self.activate_model("nami")
return True
if active_model:
# Pause active model
self.pause_model(active_model)
# Activate RUSH
self.activate_model("rush")
return True
if active_model:
# Pause active model
self.pause_model(active_model)
# Activate VEX
self.activate_model("vex")
return True
else:
self.logger.warning(f"Unknown command: {command}")
return False
except Exception as e:
self.logger.error(f"Error handling command '{command}'
return False
5. Startup Script
#!/bin/bash
# Set up environment
export PYTHONPATH="/Users/Shared/ai_models:$PYTHONPATH"
cd /Users/Shared/ai_models
# Start controller
echo "Starting controller..."
python3 -m shared.controller > logs/controller.log 2>&1 &
# Wait for controller to start
sleep 2
6. Cleanup Script
#!/bin/bash
# Set up environment
cd /Users/Shared/ai_models
Features
Implementation
import os
import sys
import time
import logging
import threading
import json
import subprocess
from typing import Dict, Any, List, Optional
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("/Users/Shared/ai_models/logs/nami.log
logging.StreamHandler()
]
)
class NamiAssistant:
"""NAMI System Control Assistant"""
def __init__(self):
"""Initialize NAMI assistant"""
self.logger = logging.getLogger("NAMI")
# Create directories
os.makedirs("/Users/Shared/ai_models/nami/cache", exist_ok
os.makedirs("/Users/Shared/ai_models/nami/downloads", exis
# Current state
self.current_directory = os.path.expanduser("~")
self.authenticated_users = set()
self.current_command = None
self.command_result = None
def load_state(self):
"""Load state from checkpoint if available"""
try:
# Get state
state = self.state_manager.get_state("nami")
if state.checkpoint_path:
self.logger.info(f"Loading state from checkpoint:
# Load checkpoint
checkpoint_id = os.path.basename(state.checkpoint_
metadata, checkpoint_data = self.checkpoint_manage
# Restore state
self.current_directory = checkpoint_data.get("curr
self.authenticated_users = set(checkpoint_data.get
self.current_command = checkpoint_data.get("curren
self.command_result = checkpoint_data.get("command
def save_state(self):
"""Save current state to checkpoint"""
try:
# Create checkpoint data
checkpoint_data = {
"current_directory": self.current_directory,
"authenticated_users": list(self.authenticated_use
"current_command": self.current_command,
"command_result": self.command_result
}
# Save checkpoint
task_name = "file_command" if self.current_command els
checkpoint_id, checkpoint_path = self.checkpoint_manag
"nami", task_name, checkpoint_data
)
# Update state
progress = 100 if self.command_result is not None else
self.state_manager.update_task_progress("nami", task_n
# Update state
self.current_command = command
self.command_result = None
if os.path.isdir(path):
result = "\n".join(os.listdir(path))
self.command_result = f"Contents of {path}:\n{
else:
self.command_result = f"Error: {path} is not a
elif command.startswith("/cd"):
# Change directory
path = command[3:].strip() or os.path.expanduser("
path = os.path.expanduser(path)
path = os.path.abspath(path)
if os.path.isdir(path):
self.current_directory = path
self.command_result = f"Changed directory to {
else:
self.command_result = f"Error: {path} is not a
elif command.startswith("/find"):
# Find files
query = command[5:].strip()
if query:
result = subprocess.run(
["find", self.current_directory, "-name",
capture_output=True,
text=True
)
if result.stdout:
self.command_result = f"Found files matchi
else:
self.command_result = f"No files found mat
else:
self.command_result = "Error: No search query
elif command.startswith("/exec"):
# Execute command
cmd = command[5:].strip()
if cmd:
# Check if command is allowed
if self._is_command_allowed(cmd):
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
cwd=self.current_directory
)
elif command.startswith("/status"):
# Get system status
cpu_result = subprocess.run(["top", "-l", "1", "-n
disk_result = subprocess.run(["df", "-h"], capture
self.command_result = f"System Status:\n\nCPU/Memo
elif command.startswith("/return_to_rush"):
# Return to RUSH
self.command_result = "Switching back to RUSH..."
elif command.startswith("/start_vex"):
# Start VEX
self.command_result = "Starting VEX..."
else:
self.command_result = f"Unknown command: {command}
# Save state
self.save_state()
return True
if auth_code == valid_code:
self.authenticated_users.add(user_id)
self.save_state()
return True
return False
def run(self):
"""Run the NAMI assistant"""
self.logger.info("Starting NAMI assistant")
# Activate model
self.state_manager.activate_model("nami")
# Main loop
try:
while True:
# In a real implementation, this would handle Tele
# and other interactions
# For this example, we'll just sleep
time.sleep(1)
except KeyboardInterrupt:
self.logger.info("Received interrupt, shutting down")
# Save state
self.save_state()
if __name__ == "__main__":
nami = NamiAssistant()
nami.run()
What this code does: - Initializes the NAMI assistant with necessary
directories and state management - Implements file system
commands like listing directories, changing directories, and finding
files - Provides system status monitoring - Includes security features
to prevent dangerous commands - Handles authentication for remote
access - Integrates with the model controller to switch between
assistants
Setup Instructions
Features
Implementation
import os
import sys
import time
import logging
import threading
import json
import pickle
from typing import Dict, Any, List, Optional
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("/Users/Shared/ai_models/logs/rush.log
logging.StreamHandler()
]
)
class RushAssistant:
"""RUSH Interactive Lecture Assistant"""
def __init__(self):
"""Initialize RUSH assistant"""
self.logger = logging.getLogger("RUSH")
# Create directories
os.makedirs("/Users/Shared/ai_models/rush/audio", exist_ok
os.makedirs("/Users/Shared/ai_models/rush/transcripts", ex
if state.checkpoint_path:
self.logger.info(f"Loading state from checkpoint:
# Load checkpoint
checkpoint_id = os.path.basename(state.checkpoint_
metadata, checkpoint_data = self.checkpoint_manage
# Restore state
self.current_audio = checkpoint_data.get("current_
self.current_transcript = checkpoint_data.get("cur
self.current_analysis = checkpoint_data.get("curre
self.current_questions = checkpoint_data.get("curr
self.processing_stage = checkpoint_data.get("proce
def save_state(self):
"""Save current state to checkpoint"""
try:
# Create checkpoint data
checkpoint_data = {
"current_audio": self.current_audio,
"current_transcript": self.current_transcript,
"current_analysis": self.current_analysis,
"current_questions": self.current_questions,
"processing_stage": self.processing_stage
}
# Save checkpoint
task_name = "process_audio" if self.current_audio else
checkpoint_id, checkpoint_path = self.checkpoint_manag
"rush", task_name, checkpoint_data
)
# Update state
progress = 0
if self.processing_stage == "transcribing":
progress = 25
elif self.processing_stage == "analyzing":
progress = 50
elif self.processing_stage == "generating_questions":
progress = 75
elif self.processing_stage == "complete":
progress = 100
self.state_manager.update_task_progress("rush", task_n
# Update state
self.current_audio = audio_file
self.current_transcript = None
self.current_analysis = None
self.current_questions = None
self.processing_stage = "starting"
def _process_audio_thread(self):
"""Background thread for processing audio"""
try:
# Transcribe audio
self.processing_stage = "transcribing"
self.save_state()
if self.should_stop.is_set():
return
self.current_transcript = self._transcribe_audio(self.
# Analyze transcript
self.processing_stage = "analyzing"
self.save_state()
if self.should_stop.is_set():
return
self.current_analysis = self._analyze_transcript(self.
# Generate questions
self.processing_stage = "generating_questions"
self.save_state()
if self.should_stop.is_set():
return
self.current_questions = self._generate_questions(self
# Complete
self.processing_stage = "complete"
self.save_state()
questions = [
"What is the significance of concept1 in this context?
"How does concept2 relate to concept3?",
"Can you explain fact1 in more detail?",
"What are the implications of fact2?",
"How might these concepts apply in different scenarios
]
def pause_processing(self):
"""Pause current processing"""
self.logger.info("Pausing processing")
# Save state
self.save_state()
self.logger.info("Processing paused")
def resume_processing(self):
"""Resume processing from last checkpoint"""
self.logger.info("Resuming processing")
# Load state
self.load_state()
def run(self):
"""Run the RUSH assistant"""
self.logger.info("Starting RUSH assistant")
# Activate model
self.state_manager.activate_model("rush")
# Main loop
try:
while True:
# In a real implementation, this would handle Tele
# and other interactions
# Pause processing
self.pause_processing()
# Save state
self.save_state()
if __name__ == "__main__":
rush = RushAssistant()
rush.run()
Setup Instructions
Features
Implementation
import os
import sys
import time
import logging
import threading
import json
import subprocess
from typing import Dict, Any, List, Optional
import datetime
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("/Users/Shared/ai_models/logs/vex.log"
logging.StreamHandler()
]
)
class VexEditor:
"""VEX Automated Vlog Editor"""
def __init__(self):
"""Initialize VEX editor"""
self.logger = logging.getLogger("VEX")
# Create directories
os.makedirs("/Users/Shared/ai_models/vex/input", exist_ok=
os.makedirs("/Users/Shared/ai_models/vex/output", exist_ok
os.makedirs("/Users/Shared/ai_models/vex/output/clips", ex
os.makedirs("/Users/Shared/ai_models/vex/temp", exist_ok=T
# Current state
self.current_video = None
self.processing_stage = None
self.processing_progress = 0
self.output_path = None
self.clips = []
def load_state(self):
"""Load state from checkpoint if available"""
try:
# Get state
state = self.state_manager.get_state("vex")
if state.checkpoint_path:
self.logger.info(f"Loading state from checkpoint:
# Load checkpoint
checkpoint_id = os.path.basename(state.checkpoint_
metadata, checkpoint_data = self.checkpoint_manage
# Restore state
self.current_video = checkpoint_data.get("current_
self.processing_stage = checkpoint_data.get("proce
self.processing_progress = checkpoint_data.get("pr
self.output_path = checkpoint_data.get("output_pat
self.clips = checkpoint_data.get("clips", [])
def save_state(self):
"""Save current state to checkpoint"""
try:
# Create checkpoint data
checkpoint_data = {
"current_video": self.current_video,
"processing_stage": self.processing_stage,
"processing_progress": self.processing_progress,
"output_path": self.output_path,
"clips": self.clips
}
# Save checkpoint
task_name = "process_video" if self.current_video else
checkpoint_id, checkpoint_path = self.checkpoint_manag
"vex", task_name, checkpoint_data
)
# Update state
self.state_manager.update_task_progress("vex", task_na
# Update state
self.current_video = video_file
self.processing_stage = "starting"
self.processing_progress = 0
self.output_path = None
self.clips = []
def _process_video_thread(self):
"""Background thread for processing video"""
try:
# Analyze video
self.processing_stage = "analyzing"
self.processing_progress = 10
self.save_state()
if self.should_stop.is_set():
return
# Edit video
self.processing_stage = "editing"
self.processing_progress = 30
self.save_state()
if self.should_stop.is_set():
return
# Add memes
self.processing_stage = "adding_memes"
self.processing_progress = 50
self.save_state()
if self.should_stop.is_set():
return
# Create clips
self.processing_stage = "creating_clips"
self.processing_progress = 70
self.save_state()
if self.should_stop.is_set():
return
# Export
self.processing_stage = "exporting"
self.processing_progress = 90
self.save_state()
if self.should_stop.is_set():
return
# Simulate exporting
time.sleep(2)
# Complete
self.processing_stage = "complete"
self.processing_progress = 100
self.save_state()
def pause_processing(self):
"""Pause current processing"""
self.logger.info("Pausing processing")
# Save state
self.save_state()
self.logger.info("Processing paused")
def resume_processing(self):
"""Resume processing from last checkpoint"""
self.logger.info("Resuming processing")
# Load state
self.load_state()
if video_files:
self.logger.info(f"Found {len(video_files)} video file
return video_files
else:
return []
def run(self):
"""Run the VEX editor"""
self.logger.info("Starting VEX editor")
# Activate model
self.state_manager.activate_model("vex")
# Main loop
try:
while True:
# If we're not currently processing a video
if not self.processing_thread or not self.processi
# Check if it's nighttime or if we're in devel
if self.is_nighttime() or os.environ.get("VEX_
# Check for new videos
videos = self.check_for_new_videos()
if videos:
# Process the first video
self.process_video(videos[0])
# Pause processing
self.pause_processing()
# Save state
self.save_state()
if __name__ == "__main__":
vex = VexEditor()
vex.run()
What this code does: - Initializes the VEX editor with necessary
directories and state management - Implements a multi-stage video
processing pipeline: 1. Analyzing video content to identify interesting
segments 2. Editing the video with transitions and effects 3. Adding
memes and other enhancements 4. Creating short-form clips for
social media 5. Exporting the final videos - Provides pause/resume
functionality to allow processing to be interrupted and continued later
- Includes screen viewing capabilities through video analysis -
Automatically runs during nighttime hours (12:00 AM to 6:30 AM) -
Monitors an input directory for new videos to process
Screen Viewing Capabilities
Setup Instructions
Pause/Resume Workflow
The pause/resume workflow allows efficient switching between
assistants while preserving their state.
How It Works
1. State Management:
2. Each assistant maintains a state file that records its current
progress
3. When switching assistants, the current state is saved
automatically
5. Memory Management:
8. Checkpoint System:
Implementation
Troubleshooting
This section covers common issues you might encounter with the AI
assistants and how to resolve them.
System Won't Start
Memory Issues
Memory Management
Scheduled Maintenance
1. Add to crontab:
0 0 * * 0 /Users/Shared/ai_models/
scheduled_maintenance.sh
Future Expansion
The system is designed to be expandable. Here are some ideas for
future enhancements:
Concurrency Testing
The checkpoint system was tested with corrupted and missing files to
verify recovery capabilities:
Performance Benchmarks
Recommendations
Remember that this system only costs you electricity, as all software
components are free and open-source.