a digital entity named phi that roams bsky

Clean up module structure and add AI integration

Major refactoring to fix design sprawl:
- Moved agent abstractions out of core (core is now truly minimal)
- Removed terrible "SimpleAnthropicAgent" naming
- Eliminated complex protocol/factory patterns
- Fixed deferred imports
- Simplified response generation to a single, clear module

AI integration:
- Added Anthropic Claude support via pydantic-ai
- Falls back to placeholder responses when no API key
- Clean, straightforward implementation

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

+127 -66
+10 -1
README.md
````diff
@@
 2. Copy `.env.example` to `.env` and add your credentials:
    - `BLUESKY_HANDLE`: Your bot's Bluesky handle
    - `BLUESKY_PASSWORD`: App password from Bluesky settings
-   - `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`: For LLM responses
+   - `ANTHROPIC_API_KEY`: For AI responses (optional, falls back to placeholder)
 
 3. Test posting:
    ```bash
@@
    ```bash
    just dev
    ```
+
+## Current Features
+
+- ✅ Responds to mentions with placeholder or AI messages
+- ✅ Proper notification handling (no duplicates)
+- ✅ Graceful shutdown for hot-reload
+- ✅ AI integration with Anthropic Claude (when API key provided)
+- 🚧 Memory system (coming soon)
+- 🚧 Self-modification capabilities (planned)
 
 ## Architecture
````
+22 -7
STATUS.md
```diff
@@
 - Local URI cache (`_processed_uris`) as safety net
 - No @mention in replies (Bluesky handles notification automatically)
 
-### Next Steps
-1. Add pydantic-ai for LLM agent implementation
-2. Add turbopuffer for vector memory
-3. Implement LLMResponseGenerator to replace PlaceholderResponseGenerator
-4. Design bot persona and system prompts
-5. Build memory system (3-tier like Void)
-6. Add profile self-modification capability (like Penelope)
+### Near-Term Roadmap
+
+#### Phase 1: AI Integration (Current Focus)
+1. **Add pydantic-ai with Anthropic provider**
+   - Use Anthropic as the LLM provider (Mac subscription available)
+   - Redesign ResponseGenerator protocol to be more general/sensible
+   - Implement AI-based response generation
+
+2. **Self-Modification Capability**
+   - Build in ability to edit own personality/profile from the start
+   - Similar to Void's self-editing and Penelope's profile updates
+   - Essential foundation before adding memory systems
+
+#### Phase 2: Memory & Persistence
+1. Add turbopuffer for vector memory
+2. Build 3-tier memory system (like Void)
+3. User-specific memory contexts
+
+#### Phase 3: Personality & Behavior
+1. Design bot persona and system prompts
+2. Implement conversation styles
+3. Add behavioral consistency
 
 ## Key Decisions to Make
 - Which LLM provider to use (OpenAI, Anthropic, etc.)
```
+1 -1
justfile
```diff
@@
 
 # Type check with ty
 typecheck:
-    uv run ty src/
+    uv run ty check
 
 # Run all checks
 check: lint typecheck test
```
+1
pyproject.toml
```diff
@@
     "atproto",
     "pydantic-settings",
     "pydantic-ai",
+    "anthropic",
     "httpx"
 ]
 
```
+6 -3
sandbox/implementation_notes.md
````diff
@@
 FastAPI (lifespan)
 └── NotificationPoller (async task)
     └── MessageHandler
-        └── ResponseGenerator (swappable)
+        └── ResponseGenerator
+            ├── AnthropicAgent (when API key available)
+            └── Placeholder responses (fallback)
 ```
 
 ### Key Files
-- `src/bot/core/atproto_client.py` - Wrapped AT Protocol client
+- `src/bot/core/atproto_client.py` - Wrapped AT Protocol client (truly core)
 - `src/bot/services/notification_poller.py` - Async polling with proper shutdown
-- `src/bot/core/response_generator.py` - Protocol-based for easy swapping
+- `src/bot/response_generator.py` - Simple response generation with AI fallback
+- `src/bot/agents/anthropic_agent.py` - Anthropic Claude integration
 
 ### Testing
 - `scripts/test_post.py` - Creates post and reply
````
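The lifespan wiring above amounts to starting a background polling task and shutting it down cleanly on reload. A minimal asyncio sketch of that graceful-shutdown shape (names like `poll_notifications` are hypothetical stand-ins, not the project's actual `NotificationPoller`):

```python
import asyncio


async def poll_notifications(stop: asyncio.Event, handled: list) -> None:
    """Poll in a loop until asked to stop (stand-in for NotificationPoller)."""
    while not stop.is_set():
        handled.append("tick")  # real code would fetch + handle mentions here
        try:
            # Sleep between polls, but wake immediately if shutdown is requested.
            await asyncio.wait_for(stop.wait(), timeout=0.01)
        except asyncio.TimeoutError:
            pass


async def main() -> list:
    stop = asyncio.Event()
    handled: list = []
    task = asyncio.create_task(poll_notifications(stop, handled))
    await asyncio.sleep(0.05)  # the app serves requests meanwhile
    stop.set()                 # graceful shutdown (hot-reload friendly)
    await task                 # poller exits its loop and finishes
    return handled


ticks = asyncio.run(main())
```

The key property for hot-reload is that shutdown sets an event and awaits the task rather than cancelling it mid-handle.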
+1
src/bot/agents/__init__.py
```diff
+# Agents module
```
+36
src/bot/agents/anthropic_agent.py
```diff
+"""Anthropic agent for generating responses"""
+
+import os
+from typing import Optional
+from pydantic_ai import Agent
+from pydantic import BaseModel, Field
+
+from bot.config import settings
+
+
+class Response(BaseModel):
+    """Bot response"""
+
+    text: str = Field(description="Response text (max 300 chars)")
+
+
+class AnthropicAgent:
+    """Agent that uses Anthropic Claude for responses"""
+
+    def __init__(self):
+        if settings.anthropic_api_key:
+            os.environ["ANTHROPIC_API_KEY"] = settings.anthropic_api_key
+
+        self.agent = Agent(
+            'anthropic:claude-3-5-haiku-latest',
+            system_prompt="""You are a friendly AI assistant on Bluesky.
+            Keep responses concise (under 300 characters).
+            Be conversational and natural.
+            Don't use @mentions in replies.""",
+            result_type=Response,
+        )
+
+    async def generate_response(self, mention_text: str, author_handle: str) -> str:
+        """Generate a response to a mention"""
+        prompt = f"{author_handle} said: {mention_text}"
+        result = await self.agent.run(prompt)
+        return result.data.text[:300]
```
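Bluesky caps posts at 300 characters, which is why the agent slices `result.data.text[:300]`. A slightly friendlier guard would mark the cut instead of silently chopping mid-word; `clamp` below is a hypothetical helper, not part of the diff:

```python
def clamp(text: str, limit: int = 300) -> str:
    """Trim a reply to Bluesky's 300-character post limit, marking the cut."""
    if len(text) <= limit:
        return text
    # Reserve one character for the ellipsis marker.
    return text[: limit - 1] + "…"
```

The hard slice in the diff is fine as a safety net since the system prompt already asks for under 300 characters.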
-51
src/bot/core/response_generator.py
```diff
-"""Response generation - placeholder for now, easy to swap with LLM later"""
-
-import random
-from typing import Protocol
-
-
-class ResponseGenerator(Protocol):
-    """Protocol for response generators - makes it easy to swap implementations"""
-
-    async def generate_response(self, mention_text: str, author_handle: str) -> str:
-        """Generate a response to a mention"""
-        ...
-
-
-class PlaceholderResponseGenerator:
-    """Temporary placeholder responses until LLM is integrated"""
-
-    responses = [
-        "🤖 beep boop! I'm still learning how to chat. Check back soon!",
-        "⚙️ *whirrs mechanically* I'm a work in progress!",
-        "🔧 Under construction! My neural networks are still training...",
-        "📡 Signal received! But my language circuits aren't ready yet.",
-        "🎯 You found me! I'm not quite ready to chat yet though.",
-        "🚧 Pardon the dust - bot brain installation in progress!",
-        "💭 I hear you! Just need to learn how to respond properly first...",
-        "🔌 Still booting up my conversation modules!",
-        "📚 Currently reading the manual on how to be a good bot...",
-        "🎪 Nothing to see here yet - but stay tuned!",
-    ]
-
-    async def generate_response(self, mention_text: str, author_handle: str) -> str:
-        """Generate a random placeholder response"""
-        # Just return a random response - no need to @mention in replies
-        # (Bluesky automatically notifies the person you're replying to)
-        return random.choice(self.responses)
-
-
-class LLMResponseGenerator:
-    """Future LLM-based response generator"""
-
-    def __init__(self, agent):
-        self.agent = agent
-
-    async def generate_response(self, mention_text: str, author_handle: str) -> str:
-        """Generate response using LLM agent"""
-        # TODO: Implement with pydantic-ai agent
-        raise NotImplementedError("LLM integration coming soon!")
-
-
-# Export the current implementation
-response_generator = PlaceholderResponseGenerator()
```
+44
src/bot/response_generator.py
```diff
+"""Response generation for the bot"""
+
+import random
+from typing import Optional
+
+from bot.config import settings
+
+
+PLACEHOLDER_RESPONSES = [
+    "🤖 beep boop! I'm still learning how to chat. Check back soon!",
+    "⚙️ *whirrs mechanically* I'm a work in progress!",
+    "🔧 Under construction! My neural networks are still training...",
+    "📡 Signal received! But my language circuits aren't ready yet.",
+    "🎯 You found me! I'm not quite ready to chat yet though.",
+    "🚧 Pardon the dust - bot brain installation in progress!",
+    "💭 I hear you! Just need to learn how to respond properly first...",
+    "🔌 Still booting up my conversation modules!",
+    "📚 Currently reading the manual on how to be a good bot...",
+    "🎪 Nothing to see here yet - but stay tuned!",
+]
+
+
+class ResponseGenerator:
+    """Generates responses to mentions"""
+
+    def __init__(self):
+        self.agent: Optional[object] = None
+
+        # Try to initialize AI agent if credentials available
+        if settings.anthropic_api_key:
+            try:
+                from bot.agents.anthropic_agent import AnthropicAgent
+
+                self.agent = AnthropicAgent()
+                print("✅ AI responses enabled (Anthropic)")
+            except Exception as e:
+                print(f"⚠️ Failed to initialize AI agent: {e}")
+                print("   Using placeholder responses")
+
+    async def generate(self, mention_text: str, author_handle: str) -> str:
+        """Generate a response to a mention"""
+        if self.agent:
+            return await self.agent.generate_response(mention_text, author_handle)
+        return random.choice(PLACEHOLDER_RESPONSES)
```
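The new module boils down to an optional-agent fallback. A self-contained sketch of that pattern, with a `StubAgent` standing in for `AnthropicAgent` so it runs without keys or network (class names here are illustrative, not the module's):

```python
import asyncio
import random

PLACEHOLDERS = ["🤖 beep boop!", "🚧 bot brain installation in progress!"]


class StubAgent:
    """Stand-in for the real AI agent; just greets the author."""

    async def generate_response(self, mention_text: str, author_handle: str) -> str:
        return f"Hi {author_handle}!"


class FallbackGenerator:
    """Use the agent when one was constructed, placeholder text otherwise."""

    def __init__(self, agent=None):
        self.agent = agent  # None when no API key / init failed

    async def generate(self, mention_text: str, author_handle: str) -> str:
        if self.agent is not None:
            return await self.agent.generate_response(mention_text, author_handle)
        return random.choice(PLACEHOLDERS)


async def demo():
    with_ai = FallbackGenerator(agent=StubAgent())
    without = FallbackGenerator()
    print(await with_ai.generate("hello", "alice.test"))  # Hi alice.test!
    print(await without.generate("hello", "alice.test"))  # a random placeholder


asyncio.run(demo())
```

Keeping the fallback in the generator (rather than in `MessageHandler`) means callers never need to know whether AI is configured.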
+4 -3
src/bot/services/message_handler.py
```diff
@@
 from atproto import models
 from bot.core.atproto_client import BotClient
-from bot.core.response_generator import response_generator
+from bot.response_generator import ResponseGenerator
 
 
 class MessageHandler:
     def __init__(self, client: BotClient):
         self.client = client
+        self.response_generator = ResponseGenerator()
 
     async def handle_mention(self, notification):
         """Process a mention notification"""
@@
         mention_text = post.record.text
         author_handle = post.author.handle
 
-        # Generate placeholder response
-        reply_text = await response_generator.generate_response(
+        # Generate response
+        reply_text = await self.response_generator.generate(
             mention_text=mention_text,
             author_handle=author_handle
         )
```
+2
uv.lock
```diff
@@
 version = "0.1.0"
 source = { editable = "." }
 dependencies = [
+    { name = "anthropic" },
     { name = "atproto" },
     { name = "fastapi" },
     { name = "httpx" },
@@
 
 [package.metadata]
 requires-dist = [
+    { name = "anthropic" },
     { name = "atproto" },
     { name = "fastapi" },
     { name = "httpx" },
```