LANCompute is a distributed computing platform that enables you to leverage all available compute resources across your local area network (LAN). It features specialized support for macOS unified memory architecture and heterogeneous compute environments.
- Automatic Node Discovery: Finds available compute resources on your LAN
- Heterogeneous Computing: Supports mixed architectures (Apple Silicon, x86, ARM)
- Unified Memory Optimization: Special optimizations for Apple Silicon Macs
- Task Distribution: Intelligent task scheduling based on node capabilities
- Platform Detection: Automatic detection of hardware capabilities
- Fault Tolerance: Handles node failures gracefully
- REST API: Simple HTTP API for task submission and monitoring
LANCompute uses a master-worker architecture:
- Master Service: Central coordinator for task distribution and node management
- Worker Services: Execute tasks on compute nodes
- Task Queue: Priority-based queue with requirement matching
- Node Manager: Tracks node health and capabilities
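The node-health side of the master can be illustrated with a minimal sketch. This is not the shipped `master_service.py` code; the class, field names, and the 30-second timeout are assumptions for illustration only.

```python
import time
from dataclasses import dataclass, field

HEARTBEAT_TIMEOUT = 30.0  # assumed: seconds without a heartbeat before a node counts as offline

@dataclass
class NodeRecord:
    node_id: str
    capabilities: dict
    last_heartbeat: float = field(default_factory=time.time)

class NodeManager:
    """Tracks registered workers and filters out stale ones."""

    def __init__(self):
        self.nodes = {}

    def register(self, node_id, capabilities):
        # New or re-registering nodes start with a fresh heartbeat timestamp.
        self.nodes[node_id] = NodeRecord(node_id, capabilities)

    def heartbeat(self, node_id):
        if node_id in self.nodes:
            self.nodes[node_id].last_heartbeat = time.time()

    def online_nodes(self):
        # A node is "online" only if it has sent a heartbeat recently.
        now = time.time()
        return [n for n in self.nodes.values()
                if now - n.last_heartbeat < HEARTBEAT_TIMEOUT]
```

The same pattern generalizes to marking nodes degraded before fully offline, if the real service distinguishes those states.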
The system includes AI agent personas for different aspects:
- distributed-systems-architect: Overall system design
- task-scheduler: Task distribution algorithms
- node-manager: Node lifecycle management
- protocol-designer: Communication protocols
- compute-optimizer: Performance optimization
Start the master service:

```bash
python master_service.py --port 8080
```

On each compute node:

```bash
python worker_service.py --master-url http://<master-ip>:8080
```

Submit a task:

```bash
curl -X POST http://localhost:8080/task \
  -H "Content-Type: application/json" \
  -d '{
    "type": "compute",
    "payload": {"operation": "matrix_multiply", "size": 1000},
    "priority": 10,
    "requirements": {"min_memory_gb": 4}
  }'
```

On Apple Silicon Macs, LANCompute automatically:
- Detects unified memory architecture
- Optimizes for zero-copy data sharing
- Utilizes performance and efficiency cores
- Supports Metal compute shaders
- Leverages Neural Engine when available
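A capability probe along these lines can be written with only the standard library. This is a sketch, not the shipped `mac_optimizer.py`; the `sysctl` keys used (`hw.perflevel0.physicalcpu` and friends) exist on Apple Silicon Macs, and the function simply returns `None` elsewhere.

```python
import platform
import subprocess

def detect_apple_silicon():
    """Return basic capability info on an Apple Silicon Mac, else None."""
    if platform.system() != "Darwin" or platform.machine() != "arm64":
        return None

    def sysctl(key):
        # Query a single sysctl value; tolerate missing keys.
        try:
            out = subprocess.run(["sysctl", "-n", key],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip()
        except (subprocess.CalledProcessError, FileNotFoundError):
            return None

    return {
        "chip": sysctl("machdep.cpu.brand_string"),
        # Performance vs. efficiency core counts on Apple Silicon
        "performance_cores": sysctl("hw.perflevel0.physicalcpu"),
        "efficiency_cores": sysctl("hw.perflevel1.physicalcpu"),
        # Unified memory: total bytes shared by CPU and GPU
        "unified_memory_bytes": sysctl("hw.memsize"),
    }
```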
Run the optimizer to see your system capabilities:
```bash
python mac_optimizer.py
```

For Intel-based systems:
- AVX instruction set utilization
- NUMA awareness
- Discrete GPU support
- Traditional memory hierarchy optimization
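On Linux, the AVX-related capabilities above can be detected by reading CPU feature flags; a minimal sketch (not the project's actual detector) follows. On systems without `/proc/cpuinfo` it degrades to reporting everything unsupported.

```python
def x86_cpu_flags():
    """Collect CPU feature flags from /proc/cpuinfo (Linux only)."""
    flags = set()
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
                    break
    except FileNotFoundError:
        pass  # non-Linux platform: report no flags
    return flags

def supports_avx():
    """Report AVX family support as a dict of booleans."""
    flags = x86_cpu_flags()
    return {
        "avx": "avx" in flags,
        "avx2": "avx2" in flags,
        "avx512": any(f.startswith("avx512") for f in flags),
    }
```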
- `GET /status` - Service status
- `GET /tasks` - List all tasks
- `GET /nodes` - List all nodes
- `GET /task/{id}` - Get task details
- `POST /task` - Submit new task
- `POST /node/register` - Register node
- `POST /node/heartbeat` - Node heartbeat
- `POST /task/update` - Update task status
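The node endpoints can be exercised from any HTTP client. The sketch below shows the shape of such calls; the payload field names (`id`, `capabilities`) are assumptions, so check `master_service.py` and `worker_service.py` for the actual wire format.

```python
import requests

MASTER_URL = "http://localhost:8080"  # address of the running master

def build_node_payload(node_id, capabilities=None):
    # Field names here are illustrative assumptions, not a confirmed schema.
    payload = {"id": node_id}
    if capabilities is not None:
        payload["capabilities"] = capabilities
    return payload

def register_node(node_id, capabilities):
    """POST /node/register with this node's id and capabilities."""
    resp = requests.post(f"{MASTER_URL}/node/register",
                         json=build_node_payload(node_id, capabilities))
    resp.raise_for_status()
    return resp.json()

def send_heartbeat(node_id):
    """POST /node/heartbeat so the master keeps this node marked online."""
    resp = requests.post(f"{MASTER_URL}/node/heartbeat",
                         json=build_node_payload(node_id))
    resp.raise_for_status()
```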
- compute - General computation tasks
- data_processing - Data transformation tasks
- ml_inference - Machine learning inference
- test - Testing and benchmarking
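The priority-plus-requirements matching that the task queue performs can be sketched with a heap. This illustrates the idea only, not the shipped queue; it assumes task dicts shaped like the submission examples (a numeric `priority` where higher runs first, and a `requirements` dict of minimums or exact matches).

```python
import heapq
import itertools

class TaskQueue:
    """Priority queue that hands each node only tasks it can satisfy."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def push(self, task):
        # Higher "priority" runs first, so negate for heapq's min-heap.
        heapq.heappush(self._heap,
                       (-task.get("priority", 0), next(self._counter), task))

    def pop_for_node(self, capabilities):
        """Pop the highest-priority task whose requirements this node meets."""
        skipped, result = [], None
        while self._heap:
            entry = heapq.heappop(self._heap)
            reqs = entry[2].get("requirements", {})
            # Numeric requirements are minimums; others must match exactly.
            if all(capabilities.get(k, 0) >= v
                   if isinstance(v, (int, float))
                   else capabilities.get(k) == v
                   for k, v in reqs.items()):
                result = entry[2]
                break
            skipped.append(entry)
        for entry in skipped:          # re-queue tasks this node couldn't take
            heapq.heappush(self._heap, entry)
        return result
```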
Edit config.yaml to customize:
- Network settings
- Security options
- Resource limits
- Task priorities
- Platform-specific optimizations
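A configuration covering those areas might look like the fragment below. The key names are illustrative assumptions only; consult the shipped `config.yaml` for the real schema.

```yaml
# Illustrative sketch -- key names are assumptions, not the actual schema.
network:
  master_port: 8080
  heartbeat_interval_s: 10
security:
  allow_subnets: ["192.168.0.0/16"]
limits:
  max_concurrent_tasks: 4
  max_memory_gb: 8
```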
- Python 3.7+
- psutil
- requests
Optional for enhanced features:
- numpy (for numerical computations)
- PyTorch/TensorFlow (for ML tasks)
```bash
# Clone the repository
git clone <repository-url>
cd LANCompute

# Install dependencies
pip install psutil requests

# Start services
python master_service.py &
python worker_service.py --master-url http://localhost:8080
```

The project includes comprehensive agent documentation in `.claude/agents/` for AI-assisted development. Each agent specializes in a different aspect of distributed systems.
Run the network scanner to find LLM services and screen sharing endpoints:

```bash
python -m lancompute.network_scanner
```

Discover macOS screen sharing (VNC) services via mDNS/Bonjour:

```bash
python -m lancompute.network_scanner --discover
```

Diagnose connectivity to a specific host (ping, VNC, ARD, SSH):

```bash
python -m lancompute.network_scanner --diagnose <hostname-or-ip>
```

Submit a task from Python:

```python
import requests

task = {
    "type": "ml_inference",
    "payload": {"model": "sentiment_analysis", "text": "Great product!"},
    "priority": 50,
    "requirements": {
        "gpu_available": True,
        "min_memory_gb": 8
    }
}
response = requests.post("http://localhost:8080/task", json=task)
print(f"Task ID: {response.json()['task_id']}")
```

Monitor nodes and tasks:

```python
import requests

# Get all nodes
nodes = requests.get("http://localhost:8080/nodes").json()
for node in nodes['nodes']:
    print(f"Node {node['id']}: {node['status']} - "
          f"{len(node['current_tasks'])} tasks")

# Get task summary
tasks = requests.get("http://localhost:8080/tasks").json()
status_counts = {}
for task in tasks['tasks']:
    status = task['status']
    status_counts[status] = status_counts.get(status, 0) + 1
print(f"Task summary: {status_counts}")
```

- Check firewall settings
- Verify master URL is correct
- Ensure master service is running
- Check node capabilities match requirements
- Verify worker has available capacity
- Check logs for errors
- Adjust `max_concurrent_tasks` in the worker
- Tune thread/process pool size
- Check network bandwidth
See agent documentation for architecture guidelines:
- `.claude/agents/distributed-systems-architect.md`
- `.claude/agents/protocol-designer.md`
- `.claude/agents/compute-optimizer.md`
This project is licensed under the MIT License - see the LICENSE file for details.
- GPU compute support (CUDA, Metal)
- Distributed storage system
- Web dashboard
- Container-based task execution
- Multi-language task support
- Advanced scheduling algorithms