Feat/devtest solution #73
AI Detection Analysis 🔍
Confidence Score: 30%
Reasoning: This pull request reflects a high degree of domain understanding, integration across multiple components (Docker, FastAPI, SQLAlchemy, Alembic, ML model training), and a thoughtful structure that suggests a careful, iterative manual development process. The author has included contextually accurate Spanish in comments and documentation, proper test coverage with specific edge and domain validation cases, and optional ML considerations. All of these point to a style of software design and development more typical of a human author with engineering experience than of an AI-generated submission.
Key Indicators:
In summary, the pull request appears to originate from a developer with strong backend/ML understanding, and while an AI could have contributed to specific file generation (e.g., notebook boilerplate), the overall integration and precise alignment with reasonable domain behavior strongly suggest human authorship.
✅ No strong indicators of AI generation detected
Solution by Martin Saieh
⚙️ How to Run the Project
1. Clone the repository
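For example (the URL and directory below are placeholders, since the repository address isn't shown in this excerpt):

```sh
git clone <repository-url>   # placeholder: substitute the actual repository URL
cd <repository-directory>
```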
2. Set up your environment
Make sure you have Docker and Docker Compose installed.
3. Build and start the backend and database
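A typical invocation for this step, assuming the project's docker-compose.yml defines the web and database services used elsewhere in these instructions:

```sh
# Build the images and start the API and database in the background (assumed service layout)
docker compose up --build -d
```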
The API will be available at:
http://localhost:8000/docs
4. Run Alembic migrations
Generate and apply migrations:
./makemigrations.sh "feat: initial migration"5. (Optional) Generate artificial data
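The script's contents aren't shown here; presumably it wraps the standard Alembic autogenerate-and-upgrade pair, roughly like this sketch:

```sh
# Hypothetical sketch of makemigrations.sh (assumption: it wraps standard Alembic commands)
docker compose exec web alembic revision --autogenerate -m "$1"
docker compose exec web alembic upgrade head
```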
5. (Optional) Generate artificial data
Inside the container, run:
docker compose exec web python3 -m app.ml.fake_data
6. Run automated tests
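The exact test command isn't shown in this excerpt; assuming pytest (the usual choice for a FastAPI project), it would look like:

```sh
# Assumption: tests run with pytest inside the web container
docker compose exec web pytest
```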
🤖 Data Model Overview
demands: id, elevator_id, floor, destination_floor, timestamp_called
resting_periods: id, elevator_id, floor, resting_start, resting_end
All main schema definitions are in app/db/models.py and app/schemas/.
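A minimal sketch of what these models might look like in SQLAlchemy, based only on the columns listed above (class names and column types are assumptions; the authoritative definitions are in app/db/models.py):

```python
from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Demand(Base):
    __tablename__ = "demands"
    id = Column(Integer, primary_key=True, index=True)
    elevator_id = Column(Integer, nullable=False)
    floor = Column(Integer, nullable=False)              # floor where the call originated
    destination_floor = Column(Integer, nullable=False)
    timestamp_called = Column(DateTime, nullable=False)

class RestingPeriod(Base):
    __tablename__ = "resting_periods"
    id = Column(Integer, primary_key=True, index=True)
    elevator_id = Column(Integer, nullable=False)
    floor = Column(Integer, nullable=False)              # floor where the elevator rested
    resting_start = Column(DateTime, nullable=False)
    resting_end = Column(DateTime, nullable=False)
```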
🔬 ML Development (Optional)
If you want to experiment with the ML training pipeline outside of Docker, it's recommended to create a Python virtual environment.
1. Create and activate a virtual environment
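For example (a standard setup; the environment name is arbitrary):

```sh
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements-ml.txt   # ML dependencies, per the Requirements section below
```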
🧠 ML Training Workflow
EDA & Aggregation:
Use Jupyter notebooks to aggregate hourly demand patterns.
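A minimal pandas sketch of that aggregation (column names follow the demands schema above; the function name is illustrative):

```python
import pandas as pd

def hourly_demand(demands: pd.DataFrame) -> pd.DataFrame:
    """Count demands per (hour of day, origin floor); a sketch of the EDA aggregation."""
    demands = demands.copy()
    demands["hour"] = pd.to_datetime(demands["timestamp_called"]).dt.hour
    return (
        demands.groupby(["hour", "floor"])
        .size()
        .reset_index(name="demand_count")
    )
```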
Feature Engineering:
Features include:
Label Calculation:
For each hour, compute the "best resting floor" that minimizes expected distance to future demand.
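A sketch of that label computation, assuming the hourly demand counts from the aggregation step (candidate_floors is illustrative):

```python
import pandas as pd

def best_resting_floors(hourly: pd.DataFrame, candidate_floors: range) -> dict[int, int]:
    """For each hour, pick the floor minimizing the demand-weighted distance to calls."""
    labels = {}
    for hour, group in hourly.groupby("hour"):
        weights = group.set_index("floor")["demand_count"]
        # Expected travel cost of resting at floor f, weighted by demand frequency
        cost = {
            f: sum(w * abs(f - floor) for floor, w in weights.items())
            for f in candidate_floors
        }
        labels[hour] = min(cost, key=cost.get)
    return labels
```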
Training:
Trained with a RandomForest model (scikit-learn).
Output:
Model saved as /ml/models/best_resting_floor_model.joblib. Data exported as CSV.
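A minimal train-and-save sketch consistent with this description (the feature columns, the use of a classifier rather than a regressor, and the CSV file name are assumptions):

```python
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical exported CSV: one row per hour with engineered features and the label
df = pd.read_csv("training_data.csv")
X = df.drop(columns=["best_resting_floor"])   # assumed feature columns
y = df["best_resting_floor"]                  # assumed label column

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# Path as stated above (inside the container)
joblib.dump(model, "/ml/models/best_resting_floor_model.joblib")
```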
🚀 API Endpoints
POST /demands/ → Create a demand entry
POST /resting_periods/ → Create a resting period entry
GET /demands/, GET /resting_periods/ → List all
POST /predict_resting_floor/ → Get optimal floor prediction from trained model
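For example, the prediction endpoint might be exercised like this once the model is trained (the request body fields are assumptions based on the features described above):

```sh
curl -X POST http://localhost:8000/predict_resting_floor/ \
  -H "Content-Type: application/json" \
  -d '{"elevator_id": 1, "hour": 9}'
```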
📦 Requirements
Files:
requirements.txt
requirements-ml.txt
💡 Optimization Approach
I define the "best floor" to rest as the one that minimizes:
The model learns the probability distribution of demand conditioned on time and past activity.