This repository was archived by the owner on Nov 6, 2020. It is now read-only.

Add conditional loops #20

Open
wants to merge 2 commits into master

Conversation


@dbrindzx dbrindzx commented Feb 8, 2017

Change-Id: Ic827808e336c78aa8b097db7030a89570e00488a
Signed-off-by: Dominik Brindza [email protected]


This change is Reviewable

KEEP_LOOPING = None

def __init__(self, end_fail=False, end_success=True, keep_looping=None):
super(CondLoopBase, self).__init__()

Does this need to be compatible with Python 2.x?
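For context, the constructor above uses the Python 2 compatible `super(CondLoopBase, self)` spelling; if 2.x support is not required, the zero-argument form suffices. A quick illustration (the class names here are made up, not from the patch):

```python
class Base(object):
    def __init__(self):
        self.ready = True

class CompatChild(Base):
    def __init__(self):
        # Python 2 and 3 compatible form, as used in the patch
        super(CompatChild, self).__init__()

class Py3Child(Base):
    def __init__(self):
        # Python 3 only zero-argument form
        super().__init__()
```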

self.end_conds_succ.append(cond)


class CondLoopImpl(CondLoopBase):

There are times when a lookahead iterator is useful; could we add something for this to these classes?

def peek(self):
  <implementation>
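One possible shape for such a lookahead wrapper; the `Peekable` name and the sentinel approach are illustrative, not part of the patch:

```python
class Peekable:
    """Wrap an iterable so the next value can be inspected without consuming it."""

    _EMPTY = object()  # sentinel meaning "no value cached yet"

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cached = self._EMPTY

    def __iter__(self):
        return self

    def peek(self, default=None):
        # Cache the next value on first peek; repeated peeks return the same value.
        if self._cached is self._EMPTY:
            try:
                self._cached = next(self._it)
            except StopIteration:
                return default
        return self._cached

    def __next__(self):
        # Hand out the cached value first, if any, then fall through to the iterator.
        if self._cached is not self._EMPTY:
            value, self._cached = self._cached, self._EMPTY
            return value
        return next(self._it)
```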

self.gen_iter = self.loops_gen()
self.last_loop_res = None

def _before_next(self):

Do we only envision a single "before" or "after" function? Based on further classes, some before and after functions are small and boilerplate.

Perhaps maintaining a list of functions and calling each in order would be useful.

if cloop.loops_counter:
do_sleep = True
with suppress(AttributeError):
do_sleep = cloop.loops_counter < cloop._count_max

Could we use getattr with default values to avoid the AttributeError and still end up with the desired default value?

if getattr(cloop, 'loops_counter', 0) < getattr(cloop, '_count_max', float('inf')):
  time.sleep(cloop._interval)

or

counter, max_count = counts = getattr(cloop, 'loops_counter', None), getattr(cloop, '_count_max', None)
if None in counts or counter < max_count:
  time.sleep(cloop._interval)
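The getattr suggestion can be exercised in isolation; here is a runnable sketch with stand-in objects (`FakeLoop`, `BoundedLoop`, and `should_sleep` are illustrative names, not from the patch):

```python
class FakeLoop:
    """Stand-in for cloop; only some attributes are set, mirroring the review."""
    loops_counter = 2

class BoundedLoop:
    loops_counter = 2
    _count_max = 2

def should_sleep(cloop):
    # A missing _count_max reads as "no upper bound", hence infinity,
    # so no AttributeError handling is needed.
    return getattr(cloop, 'loops_counter', 0) < getattr(cloop, '_count_max', float('inf'))
```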


@classmethod
def add_interval(cls, cloop, interval, *args, **kwargs):
cloop._interval = interval

Please convert interval to an int.

"""
OPTION = 'interval'

@classmethod

Why is this a class method?

cloop._deadline = cloop._start_time + timeout

def end_cond():
if not(int(time.time()) < cloop._deadline):

This tests an expression and then returns literal Boolean values; returning the result of the expression directly would be cleaner.
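The pattern being flagged can be avoided by returning the comparison itself; a minimal runnable sketch (`make_end_cond` and its deadline handling are illustrative, not the patch's actual code):

```python
import time

def make_end_cond(deadline):
    """Build a deadline predicate; returning the comparison avoids the
    'if expr: return True / else: return False' pattern."""
    def end_cond():
        return int(time.time()) >= deadline
    return end_cond

past = make_end_cond(0)                       # deadline already passed
future = make_end_cond(time.time() + 3600)    # deadline an hour away
```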

cl_value = kwargs.pop(cl_option)
cl_type.add(self, cl_value)

if kwargs:

I'd prefer to do this by set operations and detect the error before populating the OPTIONS, lest we pollute it.

def __init__(self, **kwargs):
  super().__init__()
  options_key_set = set(self.OPTIONS)
  kwargs_key_set = set(kwargs)
  bad_kwargs_keys = kwargs_key_set.difference(options_key_set)
  if bad_kwargs_keys:
    raise CondLoopException(...)
  for key, value in kwargs.items():
    self.OPTIONS[key].add(self, value)
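The set-difference validation above can be tried standalone; this sketch uses a plain dict as a stand-in for the real OPTIONS table (the names and the toy option keys are illustrative):

```python
class CondLoopException(Exception):
    pass

# Toy stand-in for the real OrderedDict of option types.
OPTIONS = {'interval': None, 'timeout': None, 'counter': None}

def validate_kwargs(kwargs):
    """Reject unknown options before any of them are applied."""
    bad_keys = set(kwargs) - set(OPTIONS)
    if bad_keys:
        raise CondLoopException('unknown options: %s' % sorted(bad_keys))
    return True

def raises_condloop(func, *args):
    """Helper for exercising the failure path."""
    try:
        func(*args)
    except CondLoopException:
        return True
    return False
```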

Author

Makes perfect sense not to populate OPTIONS before it's been validated, thanks.

Do we drop the OrderedDict for the sets, and with it any control over the order in which the options are processed? That might not matter much for the current default options, but it could for potential future extensions, for the sake of keeping this generic.

"""
@property
def _fail_iter(self):
return (True for cond in self.end_conds_fail if cond())

I would expect this to be:

return (bool(cond()) for cond in self.end_conds_fail)

so that each condition generates a result and we can then use any or all instead of list, thereby stopping when the first "bad" condition is found instead of trudging on through the rest.

And similar for _succ_iter below.
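The short-circuit behavior being suggested can be demonstrated standalone (`make_cond` and the recording list are illustrative):

```python
calls = []

def make_cond(name, result):
    def cond():
        calls.append(name)   # record evaluation order
        return result
    return cond

end_conds_fail = [make_cond('a', False), make_cond('b', True), make_cond('c', False)]

# any() over a generator stops at the first condition that fires;
# list(...) would have evaluated all three.
tripped = any(bool(cond()) for cond in end_conds_fail)
```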


def _met_all(self):
if list(self._fail_iter):
return self.ON_FAIL_FALSE

Isn't this undefined for the GenericLoop class? I'd expect this to be a pylint or pep8 error. Can we do better? Define as abstract or give dummy values?

@esmacgil

Reviewed 2 of 2 files at r1.
Review status: all files reviewed at latest revision, 16 unresolved discussions, some commit checks failed.


Comments from Reviewable

@esmacgil

Review status: all files reviewed at latest revision, 17 unresolved discussions, some commit checks failed.


taf/testlib/common/loops.py, line 71 at r1 (raw file):

    """
    """
    OPTIONS = OrderedDict()

I see how this seems appropriate as a class attribute, but doesn't that then dictate that we make separate singleton classes for each different combination of options that we will be using?

If we continue in this manner/style, where is the enforcement of the singleton nature?

Further, how do we prevent one user from modifying the class attribute thereby affecting all users?

On the other hand, if this were an instance attribute, then we wouldn't need separate classes for each variant and combination of options and different users could adjust their instances without affecting other users. Though I would appreciate a freeze command that would lock the options and prevent all future normal/simple modifications to the options.


taf/testlib/common/loops.py, line 233 at r1 (raw file):

Previously, dbrindzx wrote…


I don't think we lose anything from OPTIONS regarding ordering, since we are calling add on the values within the OrderedDict. Granted, we are calling add in a random order over these values, but the ordering of the keys of OPTIONS is unaffected by this action.

The ordering is determined by/affected when register_option is called on a given class.



@esmacgil



taf/testlib/common/loops.py, line 117 at r1 (raw file):

Previously, dbrindzx wrote…

Maybe you meant dropwhile instead of takewhile.

I'm not sure what the second argument to next should be, if any. We might as well just error out, since the case of exhausting the iterator while the last element was still self.keep_looping "should never happen"?

Yes, dropwhile is what we want here, sorry for the error.

Erring out is not what the initial implementation does. Instead, it returns self.keep_looping, since that will be the value of status should self.__next__() raise StopIteration before another status is reached. That is why my suggestion gives self.keep_looping as the default to the next call.

If we want it to error, then omit the default value.



@dbrindzx
Author



taf/testlib/common/loops.py, line 233 at r1 (raw file):

Previously, esmacgil (Edward MacGillivray) wrote…


Yeah, that's kinda the point: to match the order in which the options kwargs are processed in init with the calls to register_option, hence looping over the OPTIONS OrderedDict instead of the regular, unordered kwargs dict. I don't suppose achieving the same is quite possible via sets? They do, however, filter out the invalid options rather smoothly.

What I could really use here is a routine that would create an OrderedDict (sub)dict instance from an ordinary dict instance, based on some ordering criteria (e.g. another OrderedDict instance), and perhaps even partition off the difference. I haven't quite figured out yet how to do that efficiently in Python.
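A minimal sketch of the routine described, assuming the ordering criterion is another OrderedDict's key order (`ordered_partition` is a hypothetical name, not part of the patch):

```python
from collections import OrderedDict

def ordered_partition(data, ordering):
    """Split a plain dict into (ordered, leftover): 'ordered' holds the keys
    that appear in 'ordering', in the ordering dict's key order; 'leftover'
    holds everything else."""
    ordered = OrderedDict((key, data[key]) for key in ordering if key in data)
    leftover = {key: value for key, value in data.items() if key not in ordering}
    return ordered, leftover

spec = OrderedDict([('interval', None), ('counter', None), ('timeout', None)])
picked, rest = ordered_partition({'timeout': 60, 'interval': 3, 'bogus': 1}, spec)
```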



@esmacgil



taf/testlib/common/loops.py, line 233 at r1 (raw file):

Previously, dbrindzx wrote…


Are you expecting that the same value will be stored in OPTIONS under two different keys?

Which is to say that the order of OPTIONS will not be impacted, but the order of calling add can be affected which may impact the ordering of end_conds_fail in the case of CondLoopTimeout.

If this is what we are expecting to happen, then go ahead and maintain the initial population, just put the difference test before it.



@dbrindzx
Author



taf/testlib/common/loops.py, line 233 at r1 (raw file):

Previously, esmacgil (Edward MacGillivray) wrote…

Are you expecting that the same value will be stored in OPTIONS under two different keys?

Which is to say that the order of OPTIONS will not be impacted, but the order of calling add can be affected which may impact the ordering of end_conds_fail in the case of CondLoopTimeout.

If this is what we are expecting to happen, then go ahead and maintain the initial population, just put the difference test before it.

Yeah, the whole point of forcing an OrderedDict on OPTIONS is to store the order in which the register_option calls were made, so that this order also denotes the one in which the kwargs are processed. That inherently maintains the same ordering for the calls to add, which directly impacts the ordering in end_conds_fail.

The kwargs dict is thus processed in a predictable order; as a result, e.g. the "counter" watchdog will always halt before the "timeout" one, should both fire in the same (lazy) loop. This follows from:

DEFAULT_TYPES = [CondLoopInterval, CondLoopCounter, CondLoopTimeout]

Note: WIP, hence the completely arbitrary ordering for the time being; it is subject to change however seen fit, albeit fixed once set.

I don't see how the uniqueness property of the OPTIONS values is of any relevance here, though. But no, I don't aim to force compliance with any properties for the values.



@rbbratta
Member



taf/testlib/common/loops.py, line 233 at r1 (raw file):

Previously, dbrindzx wrote…


Review scrub: needs review



@rbbratta
Member

Why do we need this?





@dbrindzx
Author

Please, consider the following samples from taf/testlib/helpers.py:

      end_time = time.time() + timeout
      while True:
          time.sleep(0.33)
          if switch.nb.RouteInterface.get.operationalStatus(iface_id).lower() == status.lower():
              break
          else:
              if time.time() > end_time:
                  raise ...
          while True:
              if time.time() < end_time:
                  _port_row = switch.ui.get_table_ports([port])
                  _status = _port_row[0]['operationalStatus']
                  if _status == status:
                      mod_logger.debug("Function wait_for_port_status finished.")
                      break
                  iter_counter += 1
                  time.sleep(0.3)
              else:
                  mod_logger.debug("Function wait_for_port_status finished.")
                  raise ...
      while True:
          if time.time() < end_time:
              table = switch_instance.nb.Fdb.getTable()
              table_length = len(table)
              if mac_address is not None:
                  for row in table:
                      if row['macAddress'] == mac_address and row['portId'] == port_id and row['vlanId'] == vlan_id:
                          break_flag = True
                          break
                  if break_flag:
                      break
              else:
                  table_length = len(table)
                  if table_length != 0:
                      break
          else:
              raise ...
      while True:
          if time.time() < end_time:
              if findlist is not None:
                  row_id = switch_instance.findprop(table_name, findlist)
                  while (row_id == -1) and (time.time() < end_time):
                      time.sleep(0.3)
                      waiting_table_is_loaded(switch_instance=switch_instance, table_name=table_name, expected_table_length=1, timeout=60, deviation=1,
                                              direction="+")
                      row_id = switch_instance.findprop(table_name, findlist)
              if type(switch_instance).__name__.find("Switch") >= 0:
                  parameter = switch_instance.getprop(table_name, parameter_name, row_id)
              else:
                  parameter = getattr(switch_instance, "nb.%s.get.%s" % (table_name, parameter_name))(row_id)
              mod_logger.debug("table_name %s row_id %s parameter_name %s = %s; iterated %s times, time to leave = %s" %
                               (table_name, row_id, parameter_name, parameter, iter_counter, (end_time - time.time())))
              if parameter == value:
                  mod_logger.debug("Function wait_until_value_is_changed finished.")
                  break
              iter_counter += 1
              time.sleep(0.3)
          else:
              mod_logger.debug("Function wait_until_value_is_changed finished.")
              raise CustomException(("Timeout exceeded: Parameter %s wasn't changed in %s table " +
                                     "during timeout %s into %s value") %
                                    (parameter_name, table_name, timeout, value))
      while table_length not in list(range(left_margin, right_margin)):
          if time.time() >= end_time:
              mod_logger.error("Timeout exceeded: %s table length %s wasn't changed to %s during %s" %
                               (table_name, table_length, list(range(left_margin, right_margin)), timeout))
              mod_logger.debug("Function 'waiting_table_is_loaded' is finished.")
              return False
          table_length = switch_instance.getprop_size(table_name)
          mod_logger.debug("%s table length = %s, time left %s" % (table_name, table_length, end_time - time.time()))
          if verbose:
              table = switch_instance.getprop_table(table_name)
              mod_logger.debug("%s table content: %s" % (table_name, table))
          mod_logger.debug("Waiting for %s seconds for next iteration" % watch_interval)
          time.sleep(watch_interval)
      mod_logger.debug("Exit parameters: table_name: %s, expected table length range: %s, current table length: %s" %
                       (table_name, list(range(left_margin, right_margin)), table_length))
      mod_logger.debug("Function 'waiting_table_is_loaded' is finished.")
      return True
      mac_pool = []
      count = 1
      while count <= quantity:
          mac = [0x00,
                 random.randint(0x00, 0xff),
                 random.randint(0x00, 0xff),
                 random.randint(0x00, 0xff),
                 random.randint(0x00, 0xff),
                 random.randint(0x00, 0xff), ]
          mac_addr = ":".join(["%02x" % x for x in mac])
          if mac_addr not in mac_exclusions:
              mac_pool.append(mac_addr)
              mac_exclusions.append(mac_addr)
              count += 1
      mod_logger.debug("Generated MAC addresses list %s" % mac_pool)
      mod_logger.debug("Function 'generate_random_mac' is finished.")
      return mac_pool
      while True:
          if time.time() < end_time:
              table = switch_instance.getprop_table("ARP")
              mod_logger.debug("ARP table for switch is :%s" % table)
              for row in table:
                  if row['netAddress'] == net_address and row["ifId"] == if_id:
                      is_entry_added_to_arp_table = True
                      mod_logger.debug("ARP table for switch is :%s" % table)
                      return True
              # Need to wait until proper row will be added to ARP table.
              time.sleep(1)
          elif is_entry_added_to_arp_table == result:
              break
          else:
              mod_logger.debug("ARP table for switch is :%s" % table)
              pytest.fail ...
      end_time = time.time() + timeout
      result = False
      while True:
          if time.time() < end_time:
              table = switch_instance.getprop_table("Route")
              mod_logger.debug("Route table: %s" % table)
              for row in table:
                  if row['network'] == network_ip and row['nexthop'] == nexthop:
                      result = True
                      break
              # Need to wait until entry will be added to proper table.
              time.sleep(1)
          else:
              pytest.fail ...
          if result:
              break
          while True:
              # Verify that entry is removed.
              if time.time() > end_time:
                  # Need to wait untill entry will be expired from table.
                  time.sleep(1)
                  table = switch_instance.getprop_table("L2Multicast")
                  mod_logger.debug("%s table for switch is :%s" % (table_name, table))
                  if table:
                      assert switch_instance.setprop("IGMPSnoopingGlobalAdmin", "queryInterval", [1, default_interval]) == 0
                      assert switch_instance.setprop("IGMPSnoopingGlobalAdmin", "querierRobustness", [1, default_robustness]) == 0
                      pytest.fail("Table %s is not empty." % table_name)
                  else:
                      assert switch_instance.setprop("IGMPSnoopingGlobalAdmin", "queryInterval", [1, default_interval]) == 0
                      assert switch_instance.setprop("IGMPSnoopingGlobalAdmin", "querierRobustness", [1, default_robustness]) == 0
                      return True
                  while True:
                      if time.time() < end_time:
                          # Need to wait untill entry is appeared in Ports table.
                          time.sleep(1)
                          for i in range(1, slave_ports_count):
                              slave_port_result = switch_instance.findprop("Ports", [port + i, ])
                              if slave_port_result > 0:
                                  break
                          if slave_port_result > 0:
                              break
                      else:
                          pytest.fail("Slave ports are not appeared in Ports table during timeout %s seconds" % timeout)
                  for i in range(1, slave_ports_count):
                      assert switch_instance.setprop("Ports", "adminMode", [switch_instance.findprop("Ports", [port + i, ]), "Down"]) == 0

Now these are just examples from one file, and not all of them either; there is plenty more throughout taf and the testcases, even tempest.
I'm not even suggesting the existing code needs refactoring, though that would certainly be possible.
The aim is to unify these from now on wherever feasible; I myself have identified a number of candidates in our venv, magnum, and testcases too.
This feels like one of those things you keep writing over and over again. Hence this idea.

I propose the following:

def ping_routine():
    with suppress(UICmdException):
        vm_src.ui.icmp_ping_request(ip_dst, 4, options=cmd_str)
        return True
    return False

a_loop = loops.Until(timeout=60, interval=3)
result = a_loop(ping_routine)
if result:
    log.debug("Routine success: retried %d times and took %s seconds in total.", a_loop.loops_counter, a_loop.duration)
else:
    log.error("Routine failed: timed out after %s", 60)
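The proposal shows only the call site; a minimal sketch of a retry helper with that shape might look like the following (the implementation details here are assumed, not taken from the patch):

```python
import time

class Until:
    """Call a routine until it returns truthy or a timeout elapses,
    sleeping 'interval' seconds between attempts."""

    def __init__(self, timeout=60, interval=1):
        self.timeout = timeout
        self.interval = interval
        self.loops_counter = 0
        self.duration = 0.0

    def __call__(self, routine):
        start = time.time()
        deadline = start + self.timeout
        result = False
        while True:
            self.loops_counter += 1
            result = routine()
            if result or time.time() >= deadline:
                break
            time.sleep(self.interval)
        self.duration = time.time() - start
        return result

attempts = []

def flaky():
    attempts.append(1)
    return len(attempts) >= 3   # succeeds on the third call

ok = Until(timeout=10, interval=0)(flaky)
```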




@esmacgil



taf/testlib/common/loops.py, line 233 at r1 (raw file):

Previously, rbbratta (Ross Brattain) wrote…

Review scrub: needs review

Okay, ordering matters. But what if the user wants a different order than this? They have to construct the base Loop and then apply the add methods in the desired order?

I also am feeling that this is too much code to maintain relative to the utility that it provides. I like the aim, but I want parts of it to be more useful; as it stands, the overall framework seems like it is trying to replace zip.

For instance, in another code base, I made a pair of classes, one inheriting from the other, that iterate over time (TimeIter), with the child class (CountOffTimeIter) adding the notion of counting the intervals of time during the iteration. These also allowed the user to sleep each advancement of the iteration. Then there's the built-in enumerate if you want to keep track of the number of iterations.

So with these you could build an iterator by enumerate(zip(iter(my_test, object()), TimeIter(30, sleep_time=2))) which will run my_test and stop after 30 seconds, sleeping 2 seconds each iteration and giving the current iteration count.
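TimeIter itself is not shown in the thread; a minimal sketch matching the description above (iterate until a duration elapses, optionally sleeping each step) might be:

```python
import time

class TimeIter:
    """Yield elapsed seconds until 'duration' seconds have passed,
    optionally sleeping between iterations. A guess at the described
    class, not the actual implementation."""

    def __init__(self, duration, sleep_time=0):
        self.duration = duration
        self.sleep_time = sleep_time
        self._start = None

    def __iter__(self):
        self._start = time.time()
        return self

    def __next__(self):
        elapsed = time.time() - self._start
        if elapsed >= self.duration:
            raise StopIteration
        if self.sleep_time:
            time.sleep(self.sleep_time)
        return elapsed

# Runs for ~0.05 seconds, sleeping 0.02 seconds each step.
ticks = list(zip(range(5), TimeIter(0.05, sleep_time=0.02)))
```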

Then there's takewhile and dropwhile from itertools that allow for predicates to react to values coming from the iteration. Taking the above it is possible to:

def my_predicate(count_zipped):
    count, (test_result, time_) = count_zipped
    if count > my_max:
        raise StopIteration()
    return not test_result

result = next((test_result_ for count_, (test_result_, time_) in dropwhile(my_predicate, enumerate(zip(iter(my_test, object()), TimeIter(30, sleep_time=2))))), False)

Perhaps there are some additional utilities that we can provide for implementing the predicates, but I prefer the freedom that iterators and the standard library provides.

The things that I think people fail to remember are:

  1. iter(my_func, object()) will give an infinite iteration that calls my_func each iteration
  2. takewhile and dropwhile allow the values generated by an iterator to be tested on each iteration
  3. Any number of iterators can be zipped together into another iteration


@dbrindzx
Author



taf/testlib/common/loops.py, line 233 at r1 (raw file):

Previously, esmacgil (Edward MacGillivray) wrote…


Valid points. If you're OK with the idea as a whole and the suggested API, or something similar, that would be enough for now; we can sort out the implementation details next. If we don't like the current way it is done (I'm not particularly happy with the callbacks, for instance), it was just a proposed solution. I have no problem dropping it for a cleverer one, and I'm open to suggestions. In fact, I'd lean towards employing the standard iterator facilities as much as possible, provided we can support all the desired features of the API, in particular the modularity (the ability to add/remove/combine different looping techniques and behaviors) as well as extensibility.

Mind sharing the TimeIter then?

Side note on iter: I don't understand/like how it does not accept a general predicate as the optional second argument, like takewhile/dropwhile do; instead it forces an equality check against a sentinel.
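A general stop predicate can still be layered on top of the equality-only two-argument iter with takewhile; a small sketch:

```python
import itertools

# a callable producing 0, 1, 2, ... on each call
counter = iter(range(10)).__next__

# iter()'s two-argument form only stops on equality with a sentinel; wrapping
# it in takewhile supplies the arbitrary predicate that form lacks
values = list(itertools.takewhile(lambda v: v < 4, iter(counter, None)))
```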


Comments from Reviewable

@esmacgil

Review status: all files reviewed at latest revision, 17 unresolved discussions, some commit checks failed.


taf/testlib/common/loops.py, line 233 at r1 (raw file):

Previously, dbrindzx wrote…

Valid points. If you're ok with the idea as a whole and the suggested API, or similar, that would be enough for now. We can sort out the implementation details next. If we don't like the current way it is done (I'm not particularly happy with the callbacks, for instance), it was just a proposed solution. I have no problem dropping it for a cleverer one and I'm open to suggestions. In fact, I'd be leaning towards employing the standard iterator facilities as much as possible, provided that we can support all the desired features of the API, in particular the modularity (the ability to add/remove/combine different looping techniques and behaviors) as well as extensibility.

Mind sharing the TimeIter then?

Side note on iter: I don't understand/like how it does not accept a general predicate as the optional second argument, like takewhile/dropwhile do; instead it forces an equality check against a sentinel.

Sure, and I agree that having Callables usable by more built-ins would be great, whether it is iter, next, getattr or probably others. getattr(x, y, lambda: z + 5) would then behave like this helper:

from contextlib import suppress

def getattr_default(obj, name, default):
    # not named getattr, since it delegates to the real built-in
    with suppress(AttributeError):
        return getattr(obj, name)
    with suppress(TypeError):
        return default()
    return default

The main downfall is when you want to return the Callable instead of the result of the Callable, so you'd have to wrap the desired Callable return value within another Callable.

getattr(x, y, lambda: my_callable)
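A runnable demonstration of that behaviour, with getattr_or_call as a hypothetical stand-in for the enhanced built-in:

```python
from contextlib import suppress


def getattr_or_call(obj, name, default):
    """getattr() variant where a callable default is called for its value.

    Hypothetical helper illustrating the proposed behaviour; not a real
    built-in.
    """
    with suppress(AttributeError):
        return getattr(obj, name)
    with suppress(TypeError):
        return default()  # a TypeError here means default was not callable
    return default


class Box:
    value = 41
```

getattr_or_call(Box, 'value', lambda: 0) returns the existing attribute, getattr_or_call(Box, 'missing', lambda: 42) calls the default, and a non-callable default such as 7 is returned as-is, just like today's getattr.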



@esmacgil

esmacgil commented Mar 7, 2017

I still don't see the utility that I want to see from such an abstract framework.


Reviewed 2 of 2 files at r2.

@dbrindzx
Author

dbrindzx commented Mar 7, 2017

Please consider the following example:

def ping_test(vm_src, ip_dst=None, ping_command=None, cond_loop=None):
      """
      """
      if not ping_command:
          ping_command = ping_cmd.CmdPing()
      if not ip_dst:
          ping_command.check_args()
          ip_dst = ping_command['__host']

      cmd_str = ' '.join(ping_cmd.CmdPing(ping_command).unset(__host=None).to_args_list())

      def ping_routine():
          with suppress(UICmdException):
              vm_src.ui.icmp_ping_request(ip_dst, 4, options=cmd_str)
              return True
          return False

Now, when a valid cond_loop object is passed in, the function can simply hand control over to it:

      if cond_loop:
          cond_loop.prepend(Functional.f2i()(Predicate.pwrap_bool_false(ping_routine)))
          return next(cond_loop)

Otherwise, I'll have to simulate the cond_loop functionality. With a little help from the functional/predicate logic helpers, I can do this:

      max_loops = 30
      timeout = 60
      start_time = int(time.time())
      end_time = start_time + timeout

      @Functional.f2i(sentinel=False)
      def timeout_iter():
          # yields True until the deadline passes; the False sentinel ends the iteration
          return int(time.time()) < end_time

      counter_iter = range(max_loops)

      def predicate_passed(result):
          return bool(result[1][0])

      def predicate_failed(result):
          return not bool(result[1][0])

      iterables = [
          Functional.f2i()(ping_routine),
          timeout_iter,
          counter_iter,
      ]
      return bool(next(itertools.dropwhile(predicate_failed, enumerate(zip(*iterables))), False))

Or alternatively:

      iterables = [
          Functional.f2i()(Predicate.pwrap_bool_false(ping_routine)),
          timeout_iter,
          counter_iter,
      ]
      return bool(next(itertools.dropwhile(predicate_passed, enumerate(zip(*iterables))), False))

This is pretty much equivalent to what the cond_loop does under the hood.
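For readers without the project helpers, the same pattern can be reproduced with the standard library alone. Here cond_loop_stdlib is a hypothetical name, and the Functional/Predicate wrappers are replaced by plain callables:

```python
import itertools
import time


def cond_loop_stdlib(routine, max_loops, timeout):
    """Retry routine until it returns truthy, bounded by both an attempt
    count and a wall-clock timeout (stdlib-only sketch of the proposal)."""
    deadline = time.monotonic() + timeout
    # yields True until the deadline passes; the False sentinel stops it
    timeout_iter = iter(lambda: time.monotonic() < deadline, False)
    counter_iter = range(max_loops)
    results = zip(iter(routine, object()), timeout_iter, counter_iter)
    failed = lambda result: not result[0]
    # drop failing attempts; the first passing tuple is truthy, default False
    return bool(next(itertools.dropwhile(failed, results), False))


calls = {'n': 0}

def flaky():
    calls['n'] += 1
    return calls['n'] >= 3  # succeeds on the third attempt

ok = cond_loop_stdlib(flaky, max_loops=10, timeout=5)
```

The zip stops as soon as either bound is exhausted, so a routine that never succeeds returns False once max_loops or the timeout runs out.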

For completeness' sake, here's an example of calling the ping_test function:

    ping_command = ping_cmd.CmdPing.from_posargs_optkwargs(vm3_ip, __c=1, __w=30)
    cond_loop = loops.GenericUntil.from_iterables(loops.ICounter(30), loops.ITimeout(60))
    assert ping_test(vm1, ping_command=ping_command, cond_loop=cond_loop)

Do you find this too much or not enough?
Besides the primary goal of aggregating any number of iterators and callable iterators, and filtering and mapping the yielded results, there are helpers to decorate existing functions, tweak the result and more. That is pretty much all the features I had originally intended.



@esmacgil
Copy link

esmacgil commented Mar 7, 2017

Sorry, I see that it has utility. The question is whether it has enough utility and whether that utility will be utilized. I don't see quite enough utility for me to justify it on that basis and I can't say whether it will be widely used.

Have you discussed this with others and do they think it is something they would use?



@dbrindzx
Author

dbrindzx commented Mar 9, 2017

Well, now is a good time to discuss that and decide on the next steps. Consider this a first working draft; it should outline the general idea nicely. Let me know if you find it useful enough, or what you would like to see added, reworked or removed to achieve better utility.
I have added some general helpers to ease working with iterators/functions/predicates, whose usefulness might not be limited to the scope of this change request, albeit already dependent on it.

So far I'm aware of Martin B., who was willing to use this had it been available; in fact, it may have been the Magnum code he was working on that gave birth to this very idea. I myself am in the middle of another fix where, by employing the helpers, I am already avoiding duplicated code. Others may need to be made aware first.



@rbbratta
Member

rbbratta commented Apr 6, 2017

Review scrub: please address comments

@NazarTkachuk

The TAF docstring style has changed. Detailed information can be found at
http://taf-docs.readthedocs.io/en/latest/development_guide.html#docstring

Signed-off-by: Dominik Brindza <[email protected]>
Add linux ping command utility
Add conditional loops helpers
Add ping loop to venv instances readiness checking
Add linux generic tool context management
Enhance command helpers
Adapt unittests

Change-Id: Ibea4f7fe4f616d53e40655b84b5203744865885f
Signed-off-by: Dominik Brindza <[email protected]>

Conflicts:
	taf/plugins/pytest_tempest.py
	taf/testlib/custom_exceptions.py
	taf/testlib/dev_iperftg.py
	taf/testlib/dev_linux_host.py
	taf/testlib/dev_linux_host_vm.py
	taf/testlib/helpers.py
	taf/testlib/linux/commands/cmd_helper.py
	taf/testlib/linux/iperf/iperf.py
	taf/testlib/linux/iperf/iperf_cmd.py
	taf/testlib/virtual_env.py
	unittests/linux/test_commands.py
	unittests/linux/test_iperf.py
@dbrindzx dbrindzx force-pushed the conditional_loops branch from e037c21 to 0146fc3 Compare April 24, 2017 12:06
@esmacgil

How are these changes related to "Add conditional loops"?


Reviewed 1 of 21 files at r3.
Review status: 4 of 24 files reviewed at latest revision, 17 unresolved discussions.



@dbrindzx
Author

Built on top of them, actually. Admittedly, a separate review with a dependency would have been more suitable, especially considering the size of this one already.



5 participants