Conversation


@kalmik kalmik commented Jun 28, 2025

This implements the NextLevel-Elevator API: a simple, basic implementation focused on supplying what is necessary to store users' demands.

This API has 4 endpoints:

- `POST /api/v1/elevator` creates a new elevator
- `PUT /api/v1/elevator/{elevator_id}` initiates a new demand
- `POST /api/v1/elevator/{elevator_id}/state` reacts to an elevator state update when it reaches a new level
- `GET /api/v1/elevator/dataset.csv` retrieves the dataset that will be used to train the prediction model

Since the main goal is to provide data to predict the best level for the elevator to rest at, we basically need to provide data to predict where the next demand will be. Let's work on the demand itself.

I'm considering that the elevator will be controlled by its own control system, and this API only reacts to elevator events, such as a demand or a state update.

Here's how it's going to work.

Whenever a user calls the elevator to a certain level, the elevator system will query this API to store the user demand, and that demand needs to be unique until the elevator attends to it. For that requirement, let's rely on a SQL unique constraint and handle it atomically, catching the DB IntegrityError, to make sure there will never be a race condition.

```python
from typing import Optional

from sqlalchemy import UniqueConstraint
from sqlmodel import Field, SQLModel


class ElevatorDemand(SQLModel, table=True):
    __table_args__ = (
        UniqueConstraint(
            "elevator_id",
            "level",
            name="uniq_elevator_id_level",
        ),
    )
    id: Optional[int] = Field(default=None, primary_key=True)
    elevator_id: int = Field(foreign_key="elevator.id")
    timestamp: int = Field()
    level: int = Field()
```

So this is the basic model of an Elevator Demand, storing the elevator_id, the timestamp and the level; each demand is made unique by the (elevator_id, level) columns.

The next important rule is how to react to the elevator state. Whenever the elevator stops at any level, it must call the API to store the new state. At this point the backend will query for any open demand for that level; if it finds one, it will create a new entry in a table called elevator_demand_history with the data needed by the training system, and it will DELETE the entry it found, opening a slot for a new demand on that level. All of this happens in the same transaction to achieve consistency.

```python
from typing import Optional

from sqlmodel import Field, SQLModel


class ElevatorDemandHistory(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    elevator_id: int = Field(foreign_key="elevator.id")
    week_day: int = Field()
    hour: int = Field()
    minute: int = Field()
    second: int = Field()
    level: int = Field()
```

This stores the demand that was completely attended by the elevator, splitting the timestamp into week_day, hour, minute and second, so it's easier to group demands by any time heuristic with second precision.

This model could be enriched with more data if needed, for example seasonal information such as whether the day is a holiday, but I decided to keep it more straightforward.

The end user can now query the endpoint to retrieve the dataset in CSV format with the following columns:
elevator_id,week_day,hour,minute,second,level

It can use elevator_id, week_day, hour, minute and second as the input of the ML model, with whatever time precision the user wants (grouping by quarter of a minute, half a minute, or anything else), and use level as the output of the ML model.

Once trained, it can be used by the elevator control system: any time the elevator is resting, it can run inference to predict where the next demand will be.
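To make that concrete, here is a tiny made-up baseline consuming the CSV (not part of this PR, and not a real ML model): it just counts which level was demanded most often for a given (elevator, week_day, hour) slot, which a trained model would replace.

```python
import csv
import io
from collections import Counter


def most_demanded_level(dataset_csv: str, elevator_id: int,
                        week_day: int, hour: int) -> int:
    """Naive baseline: the historically most frequent level for this time slot."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(dataset_csv)):
        if (int(row["elevator_id"]) == elevator_id
                and int(row["week_day"]) == week_day
                and int(row["hour"]) == hour):
            counts[int(row["level"])] += 1
    # Fall back to the ground floor when the slot has no history.
    return counts.most_common(1)[0][0] if counts else 0


# Sample rows in the dataset.csv format described above.
dataset = """elevator_id,week_day,hour,minute,second,level
1,0,8,15,30,3
1,0,8,42,10,3
1,0,8,50,5,7
1,0,17,2,0,0
"""
```

For example, `most_demanded_level(dataset, 1, 0, 8)` picks level 3, since that level dominates Monday-morning demands in the sample; any finer grouping (quarter-minute, half-minute) just means including minute/second in the slot key.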

@github-actions

AI Detection Analysis 🔍

Confidence Score: 30%

Reasoning: The content of this pull request strongly resembles the work of a human developer, particularly one with a solid understanding of database schema design, REST API development with FastAPI, and data engineering for machine learning purposes. Though the structure and organization of code are clean and consistent, several elements, including the informal and sometimes unpolished writing style, contextual understanding of elevator systems, and detailed inline explanations, all suggest a human rather than AI-generated authorship. If this were AI-generated, we would expect better grammar and formatting consistency, fewer colloquialisms, and more rigidly organized code comments and documentation.

Key Indicators:

Human Indicators:

  • Informal and non-standard English usage: e.g., “lets rely on SQL DB unique contraint,” “rece condition,” “So The is the basic model…”
  • Spelling and grammar errors: “Intergrity,” “suppoted,” “demands that was completelly attended,” “straight foward,” etc.
  • Narrative thinking: The author discusses reasoning behind implementation choices in a personal and iterative tone.
  • Clear traces of trial-and-error or development pragmatism: e.g., using a startup hook for database creation, a note about this being “only for didatical pourpose.”
  • Varied and nuanced decision-making based on domain knowledge — balancing performance, atomicity, training data format, and timestamp breakdown to aid ML ingestion.

Potential AI-Indicators (minor):

  • Clean API design and modular structure (could suggest AI use but also reflects standard practices).
  • Use of modern frameworks like FastAPI and SQLModel in a best-practice way, which might overlap with AI-generated examples.

Overall, the natural imperfections, thoughtful design decisions, and domain-specific phrasing are more consistent with human authorship.

✅ No strong indicators of AI generation detected

@github-actions

AI Detection Analysis 🔍

Confidence Score: 40%

Reasoning:
The pull request appears to be a well-scoped and practical implementation of an elevator demand tracking system using FastAPI and SQLModel, including Docker configuration, unit tests, database modeling, and endpoints for data storage and retrieval. The architectural decisions are sensible, domain-specific, and executed with care. However, there are signs of some technical writing flaws that are characteristic of a non-native English speaker or someone typing conversationally and informally, not typically how AI would present technical documentation.

Additionally, the testing suite and the edge case handling in the implementation (like DB-level uniqueness constraints for race conditions, timestamp segmentation for ML readiness, inference-focused design decisions, SQLModel transactions, test structure using pytest fixtures, exception handling, and API routing) demonstrate a moderately advanced grasp of backend design that suggests human authorship. Some comments, typos (e.g., "rece condition", "whatevet", "demends", "Testsing"), and informal phrasing also support a human origin.

Key Indicators:

  • Human-like typos and informal language such as “rece condition”, “holyday”, and “whatevet” which aren't typical outputs from LLMs.
  • Logical and consistent narrative thread about elevator demand tracking and ML-readiness.
  • Appropriate and idiomatic use of FastAPI-router structure, SQLModel classes, exception handling with SQLAlchemy's IntegrityError.
  • Clearly defined test cases using pytest with specific test data and assertion logic.
  • Domain-driven constraints and heuristic transformations (e.g., timestamp breakdown into hour, minute, second) aligning well with real-world ML preprocessing.

Taken together, the programming and planning complexity as well as the human error patterns lower the confidence that this was AI-generated. Thus, while some technical language could be LLM-assisted, the overall structure and quality suggest primarily human authorship.

✅ No strong indicators of AI generation detected
