Conversation
```go
for _, w := range workers {
	status, err := w.Status()
	if err != nil {
		log.Warning.Println("error getting worker status:", err)
```
How much more info would you like in this error message? The blank status? Or the worker, perhaps?
```go
// Collect status of each comp on each worker
compMap := getCurrentInstanceState(scaler.workers)

log.Info.Println("----------------------------")
```
It looks pretty for testing
```diff
 User: activeComp.User,
 Repo: activeComp.Repo,
-Hash: correctHash,
+Hash: correctHash.hash,
```

```go
	return errors.New("Something weird happened.")
}

func (mgr *ActionManager) findWorkerToDeployTo(compID worker.ComponentID) (*worker.V9Worker, error) {
```
Doesn't this function already exist in this file?
This function ensures that the worker it returns doesn't have the component running on it.
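A minimal sketch of that selection logic (with simplified stand-in types; the real `worker.V9Worker` and its status shape are assumptions here): walk the workers, skip any whose status already lists the component, and return the first clean one.

```go
package main

import (
	"errors"
	"fmt"
)

// Worker is a simplified stand-in for the real worker type.
type Worker struct {
	Name       string
	Components []string // IDs of components currently running on this worker
}

// runs reports whether compID is already deployed on this worker.
func (w *Worker) runs(compID string) bool {
	for _, c := range w.Components {
		if c == compID {
			return true
		}
	}
	return false
}

// findWorkerToDeployTo returns the first worker NOT already running compID,
// so a deploy never doubles up on one worker.
func findWorkerToDeployTo(workers []*Worker, compID string) (*Worker, error) {
	for _, w := range workers {
		if !w.runs(compID) {
			return w, nil
		}
	}
	return nil, errors.New("no worker available without component " + compID)
}

func main() {
	ws := []*Worker{
		{Name: "worker_0", Components: []string{"comp-a"}},
		{Name: "worker_1"},
	}
	w, _ := findWorkerToDeployTo(ws, "comp-a")
	fmt.Println(w.Name) // worker_1
}
```

That "skip if already running" check is what distinguishes it from a plain "find any worker" helper.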
Others left a comment:
Much closer! I think the only big problem remaining is the ensureNWorkersAreRunning function
```go
workerIDs := make([]string, len(scaler.workers))
for i := range scaler.workers {
	name := fmt.Sprintf("worker_%d", i)
	id, err := scaler.driver.FindWorkerID(name)
	if err != nil {
		log.Error.Println("error getting worker id:", err)
		continue
	}
	workerIDs[i] = id
}
```
We now compute this "worker name" in a bunch of places. We should pull it out into a helper for sure.
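Such a helper could be as small as this sketch (`workerName` is a hypothetical name; the `"worker_%d"` format is taken from the snippet above):

```go
package main

import "fmt"

// workerName builds the canonical name for worker i. Centralizing it means
// the naming scheme lives in exactly one place.
func workerName(i int) string {
	return fmt.Sprintf("worker_%d", i)
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(workerName(i))
	}
}
```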
```go
compMap[cID].averageStats.Hits += componentStats.Hits
compMap[cID].averageStats.Hits /= float64(compMap[cID].instanceCount)
```
Won't this divide the number every time you increment it?
If the component is already in the map, we need to update the average, so this is like a rolling average.
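For reference, the usual incremental mean avoids repeated division of an already-averaged value by folding each observation in as `mean += (x - mean) / n`. A sketch (the `stats` type and field names are stand-ins, not the real `compMap` types):

```go
package main

import "fmt"

// stats accumulates a running mean without keeping the full sample list.
type stats struct {
	mean float64
	n    int
}

// add folds one observation into the mean:
// mean_new = mean_old + (x - mean_old) / n
func (s *stats) add(x float64) {
	s.n++
	s.mean += (x - s.mean) / float64(s.n)
}

func main() {
	var s stats
	for _, hits := range []float64{10, 20, 30} {
		s.add(hits)
	}
	fmt.Println(s.mean) // 20
}
```

With this form, each value is divided exactly once, which sidesteps the "divide every time you increment" concern raised above.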
```go
	dirtyStateNotifier: dirtyStateNotifier,
}

//Thread for handling hash changes
```
space before the beginning of comment
Wait do we want a space or not?
```go
		mgr.NotifyComponentStateChanged()
	}
}()
```
add comment: should we be batching hash updates in one lock?
I was thinking this as well. The question is: do we want a thread for each channel? One thread will be updated by the autoscaler periodically; the other will be changing the hash as needed.
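One way to sketch the batching idea (channel, lock, and function names here are hypothetical): after receiving one update, greedily drain whatever else is already queued, then take the lock once for the whole batch.

```go
package main

import (
	"fmt"
	"sync"
)

// drainBatch receives one value, then collects everything already buffered
// on the channel, so the caller can apply the whole batch under one lock.
func drainBatch(ch <-chan string) []string {
	batch := []string{<-ch} // block for at least one update
	for {
		select {
		case v := <-ch:
			batch = append(batch, v)
		default:
			return batch // channel drained; stop without blocking
		}
	}
}

func main() {
	var mu sync.Mutex
	hashes := make(chan string, 8)
	hashes <- "a1"
	hashes <- "b2"
	hashes <- "c3"

	batch := drainBatch(hashes)

	mu.Lock() // one lock acquisition covers the whole batch
	for _, h := range batch {
		fmt.Println("applying hash update:", h)
	}
	mu.Unlock()
}
```

Whether this lives in one goroutine per channel or a single loop `select`ing over both is exactly the open question above; the drain pattern works either way.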