Possible bug in weighted_fair when resource is 'bit' #717

@nemethf

Description
I think the current behavior of the weighted_fair scheduler when the resource is set to 'bit' is counter-intuitive. Take a look at the following BESS script:

bess.add_tc('root', 'weighted_fair', wid=0, resource='bit')

tagger = SetMetadata(attrs=[{'name': 'tag', 'size': 4, 'value_int': 0}])
split = Split(attribute='tag', size=4)

s::Source() -> tagger -> split

for i in range(2):
  q = Queue()
  name = 't-%s' % i
  bess.add_tc(name=name, parent='root', policy='round_robin', share=1)
  q.attach_task(parent=name)
  split:i -> q -> Sink()

bess.attach_task(s.name, wid=0)

Running the script results in a deadlock: the split module never sends a packet to the second queue, so that queue never forwards a packet, although it is always 'runnable'.

+--------+                   +-------------+                   +----------+                   +-----------+             +-------+
|   s    |                   |  setattr0   |                   |  split0  |                   |  queue0   |             | sink0 |
| Source |  :0 81434528 0:   | SetMetadata |  :0 81475136 0:   |  Split   |  :0 81579712 0:   |   Queue   |  :0 32 0:   | Sink  |
|        | ----------------> |             | ----------------> |          | ----------------> | 1023/1024 | ----------> |       |
+--------+                   +-------------+                   +----------+                   +-----------+             +-------+
                                                                 |
                                                                 | :1 0 0:
                                                                 v
                                                               +----------+                   +-----------+
                                                               |  queue1  |                   |   sink1   |
                                                               |  Queue   |  :0 0 0:          |   Sink    |
                                                               |  0/1024  | ----------------> |           |
                                                               +----------+                   +-----------+

<worker 0>
  +-- !default_rr_0            round_robin
      +-- root                 weighted_fair
      |   +-- t-0              round_robin         share: 1
      |   |   +-- !leaf_queue0:0 leaf
      |   +-- t-1              round_robin         share: 1
      |       +-- !leaf_queue1:0 leaf
      +-- !leaf_s:0            leaf

I somehow expected that the scheduler would choose queue0 because queue1 was empty, so queue0 would opportunistically get all the cycles. Now that I understand the current behavior I can live with it, but I still wonder whether it is intended. Thanks.
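The starvation can be sketched with a toy model (plain Python, not BESS code; the `Child` class and `run` function are made up for illustration). Assuming the scheduler picks the runnable child with the lowest virtual time, computed as bits used divided by share, an always-runnable child that never forwards any bits keeps a virtual time of zero and monopolizes the scheduler:

```python
class Child:
    """A traffic-class child accounted by bits forwarded."""
    def __init__(self, name, share, backlog_bits):
        self.name = name
        self.share = share
        self.bits_used = 0
        self.backlog_bits = backlog_bits  # bits waiting in the queue

    @property
    def vtime(self):
        # virtual time: usage normalized by the child's share
        return self.bits_used / self.share

def run(children, rounds):
    picks = []
    for _ in range(rounds):
        # pick the runnable child with the smallest virtual time
        # (ties go to the first child in the list)
        c = min(children, key=lambda c: c.vtime)
        picks.append(c.name)
        # serve up to one 512-bit batch; an empty queue forwards
        # nothing, so its usage (and virtual time) never advances
        served = min(512, c.backlog_bits)
        c.backlog_bits -= served
        c.bits_used += served
    return picks

children = [Child('t-0', 1, backlog_bits=10_000),
            Child('t-1', 1, backlog_bits=0)]  # never receives packets
print(run(children, 5))  # → ['t-0', 't-1', 't-1', 't-1', 't-1']
```

After the initial tie, the empty 't-1' is chosen every round because its bit usage stays at zero, which matches the deadlock seen above; a work-conserving scheduler would instead skip a child with nothing to send.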
