The following module functions all construct and return iterators. Some provide streams of infinite length, so they should only be accessed by functions or loops that truncate the stream.
chain(*iterables)
  Make an iterator that returns elements from the first iterable until it
  is exhausted, then proceeds to the next iterable, until all of the
  iterables are exhausted.  Used for treating consecutive sequences as a
  single sequence.  Equivalent to:
     def chain(*iterables):
         for it in iterables:
             for element in it:
                 yield element
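For illustration, a short interactive session (the sample strings are arbitrary) showing chain() treating two sequences as one:
    >>> from itertools import chain
    >>> list(chain('ABC', 'DEF'))
    ['A', 'B', 'C', 'D', 'E', 'F']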
count([n])
  Make an iterator that returns consecutive integers starting with n.  If
  not specified, n defaults to zero.  Equivalent to:
     def count(n=0):
         while True:
             yield n
             n += 1
Note, count() does not check for overflow and will return
  negative numbers after exceeding sys.maxint.  This behavior
  may change in the future.
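Because count() yields an infinite stream, it is normally truncated as noted above; for example, using islice() (described below) with an arbitrarily chosen starting value:
    >>> from itertools import count, islice
    >>> list(islice(count(10), 5))
    [10, 11, 12, 13, 14]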
cycle(iterable)
  Make an iterator returning elements from the iterable and saving a copy
  of each.  When the iterable is exhausted, return elements from the saved
  copy.  Repeats indefinitely.  Equivalent to:
     def cycle(iterable):
         saved = []
         for element in iterable:
             yield element
             saved.append(element)
         while saved:
             for element in saved:
                  yield element
Note, this member of the toolkit may require significant auxiliary storage (depending on the length of the iterable).
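As a sketch of typical use (the sample string is arbitrary), cycle() is usually paired with something that truncates the stream, such as islice():
    >>> from itertools import cycle, islice
    >>> list(islice(cycle('ABC'), 7))
    ['A', 'B', 'C', 'A', 'B', 'C', 'A']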
dropwhile(predicate, iterable)
  Make an iterator that drops elements from the iterable as long as the
  predicate is true; afterwards, returns every remaining element.  Note,
  the iterator does not produce any output until the predicate first
  becomes false, so it may have a lengthy start-up time.  Equivalent to:
     def dropwhile(predicate, iterable):
         iterable = iter(iterable)
         for x in iterable:
             if not predicate(x):
                 yield x
                 break
         for x in iterable:
             yield x
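For example, with an arbitrary predicate and list, dropwhile() skips the leading elements that satisfy the predicate and then returns everything else, including later elements that would also satisfy it:
    >>> from itertools import dropwhile
    >>> list(dropwhile(lambda x: x < 5, [1, 4, 6, 4, 1]))
    [6, 4, 1]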
groupby(iterable[, key])
  Make an iterator that returns consecutive keys and groups from the
  iterable.  key is a function computing a key value for each element.  If
  not specified or set to None, key defaults to an identity function and
  returns the element unchanged.  Generally, the iterable needs to already
  be sorted on the same key function.
The returned group is itself an iterator that shares the underlying iterable with groupby(). Because the source is shared, when the groupby object is advanced, the previous group is no longer visible. So, if that data is needed later, it should be stored as a list:
    groups = []
    uniquekeys = []
    for k, g in groupby(data, keyfunc):
        groups.append(list(g))      # Store group iterator as a list
        uniquekeys.append(k)
groupby() is equivalent to:
    class groupby(object):
        def __init__(self, iterable, key=None):
            if key is None:
                key = lambda x: x
            self.keyfunc = key
            self.it = iter(iterable)
            self.tgtkey = self.currkey = self.currvalue = xrange(0)  # sentinel that compares unequal to any actual key
        def __iter__(self):
            return self
        def next(self):
            while self.currkey == self.tgtkey:
                self.currvalue = self.it.next() # Exit on StopIteration
                self.currkey = self.keyfunc(self.currvalue)
            self.tgtkey = self.currkey
            return (self.currkey, self._grouper(self.tgtkey))
        def _grouper(self, tgtkey):
            while self.currkey == tgtkey:
                yield self.currvalue
                self.currvalue = self.it.next() # Exit on StopIteration
                self.currkey = self.keyfunc(self.currvalue)
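For illustration, grouping a string of characters (sample data chosen arbitrarily) shows how consecutive runs are collected and why unsorted data can produce repeated keys:
    >>> from itertools import groupby
    >>> [k for k, g in groupby('AAAABBBCCDAABBB')]
    ['A', 'B', 'C', 'D', 'A', 'B']
    >>> [list(g) for k, g in groupby('AAAABBBCCD')]
    [['A', 'A', 'A', 'A'], ['B', 'B', 'B'], ['C', 'C'], ['D']]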
ifilter(predicate, iterable)
  Make an iterator that filters elements from the iterable, returning only
  those for which the predicate is True.
  If predicate is None, return the items that are true.
  Equivalent to:
     def ifilter(predicate, iterable):
         if predicate is None:
             predicate = bool
         for x in iterable:
             if predicate(x):
                 yield x
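For example, keeping the odd numbers from range(10) (an arbitrary choice of predicate and data):
    >>> from itertools import ifilter
    >>> list(ifilter(lambda x: x % 2, range(10)))
    [1, 3, 5, 7, 9]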
ifilterfalse(predicate, iterable)
  Make an iterator that filters elements from the iterable, returning only
  those for which the predicate is False.
  If predicate is None, return the items that are false.
  Equivalent to:
     def ifilterfalse(predicate, iterable):
         if predicate is None:
             predicate = bool
         for x in iterable:
             if not predicate(x):
                 yield x
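Using the same arbitrary predicate and data as the ifilter() example, ifilterfalse() keeps the complementary elements:
    >>> from itertools import ifilterfalse
    >>> list(ifilterfalse(lambda x: x % 2, range(10)))
    [0, 2, 4, 6, 8]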
imap(function, *iterables)
  Make an iterator that computes the function using arguments from each of
  the iterables.  If function is set to None, then
  imap() returns the arguments as a tuple.  Like
  map() but stops when the shortest iterable is exhausted
  instead of filling in None for shorter iterables.  The reason
  for the difference is that infinite iterator arguments are typically
  an error for map() (because the output is fully evaluated)
  but represent a common and useful way of supplying arguments to
  imap().
  Equivalent to:
     def imap(function, *iterables):
         iterables = map(iter, iterables)
         while True:
             args = [i.next() for i in iterables]
             if function is None:
                 yield tuple(args)
             else:
                 yield function(*args)
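As a small illustration with arbitrary data, imap() applied to pow() stops as soon as the shortest input is exhausted:
    >>> from itertools import imap
    >>> list(imap(pow, (2, 3, 10), (5, 2, 3, 7)))
    [32, 9, 1000]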
islice(iterable, [start,] stop [, step])
  Make an iterator that returns selected elements from the iterable.  If
  start is non-zero, then elements from the iterable are skipped until
  start is reached.  Afterward, elements are returned consecutively unless
  step is set higher than one, which results in items being skipped.  If
  stop is None, then iteration continues until
  the iterator is exhausted, if at all; otherwise, it stops at the specified
  position.  Unlike regular slicing,
  islice() does not support negative values for start,
  stop, or step.  Can be used to extract related fields
  from data where the internal structure has been flattened (for
  example, a multi-line report may list a name field on every
  third line).  Equivalent to:
     def islice(iterable, *args):
         s = slice(*args)
         it = iter(xrange(s.start or 0, s.stop or sys.maxint, s.step or 1))
         nexti = it.next()
         for i, element in enumerate(iterable):
             if i == nexti:
                 yield element
                 nexti = it.next()
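A sketch of the flattened-report use mentioned above; the report lines here are made up for illustration, with a name field appearing on every third line:
    >>> from itertools import islice
    >>> report = ['name: Alice', 'phone: 555-0100', 'fax: 555-0101',
    ...           'name: Bob', 'phone: 555-0102', 'fax: 555-0103']
    >>> list(islice(report, 0, None, 3))
    ['name: Alice', 'name: Bob']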
izip(*iterables)
  Make an iterator that aggregates elements from each of the iterables.
  Like zip() except that it returns an iterator instead of a list.  Used
  for lock-step iteration over several iterables at a time.  Equivalent to:
     def izip(*iterables):
         iterables = map(iter, iterables)
         while iterables:
             result = [it.next() for it in iterables]
             yield tuple(result)
Changed in version 2.4: When no iterables are specified, returns a zero length iterator instead of raising a TypeError exception.
Note, the left-to-right evaluation order of the iterables is guaranteed. This makes possible an idiom for clustering a data series into n-length groups using "izip(*[iter(s)]*n)". For data that doesn't fit n-length groups exactly, the last tuple can be pre-padded with fill values using "izip(*[chain(s, [None]*(n-1))]*n)".
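For illustration, the clustering idiom above applied to a nine-element sequence (arbitrary sample data) yields three-element groups:
    >>> from itertools import izip
    >>> s = range(9)
    >>> list(izip(*[iter(s)]*3))
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)]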
Note, when izip() is used with unequal length inputs, subsequent
  iteration over the longer iterables cannot reliably be continued after
  izip() terminates.  Potentially, up to one entry will be missing
  from each of the left-over iterables. This occurs because a value is fetched
  from each iterator in turn, but the process ends when one of the iterators
  terminates.  This leaves the last fetched values in limbo (they cannot be
  returned in a final, incomplete tuple and they cannot be pushed back
  into the iterator for retrieval with it.next()).  In general,
  izip() should only be used with unequal length inputs when you
  don't care about trailing, unmatched values from the longer iterables.
repeat(object[, times])
  Make an iterator that returns object over and over again.  Runs
  indefinitely unless the times argument is specified.  Equivalent to:
     def repeat(object, times=None):
         if times is None:
             while True:
                 yield object
         else:
             for i in xrange(times):
                 yield object
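For example, with an arbitrary object and count, or supplying a constant second argument to imap() (illustrative uses only):
    >>> from itertools import imap, repeat
    >>> list(repeat('over-and-over', 3))
    ['over-and-over', 'over-and-over', 'over-and-over']
    >>> list(imap(pow, range(5), repeat(2)))
    [0, 1, 4, 9, 16]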
starmap(function, iterable)
  Make an iterator that computes the function using argument tuples
  obtained from the iterable.  Used instead of imap() when the argument
  parameters are already grouped in tuples from a single iterable (the
  data has been "pre-zipped").  The difference between imap() and
  starmap() parallels the distinction between function(a,b) and
  function(*c).
  Equivalent to:
     def starmap(function, iterable):
         iterable = iter(iterable)
         while True:
             yield function(*iterable.next())
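For illustration, with argument tuples chosen arbitrarily, starmap() unpacks each tuple into the function call:
    >>> from itertools import starmap
    >>> list(starmap(pow, [(2, 5), (3, 2), (10, 3)]))
    [32, 9, 1000]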
takewhile(predicate, iterable)
  Make an iterator that returns elements from the iterable as long as the
  predicate is true.  Equivalent to:
     def takewhile(predicate, iterable):
         for x in iterable:
             if predicate(x):
                 yield x
             else:
                 break
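Using the same arbitrary data as the dropwhile() example, takewhile() returns the complementary leading run:
    >>> from itertools import takewhile
    >>> list(takewhile(lambda x: x < 5, [1, 4, 6, 4, 1]))
    [1, 4]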
tee(iterable[, n=2])
  Return n independent iterators from a single iterable.  The case where
  n==2 is equivalent to:
     def tee(iterable):
          def gen(next, data={}, cnt=[0]):
              # The mutable default arguments are created once and shared by
              # both generators returned below: whichever generator reaches
              # position i first fetches the value and caches it in data;
              # the other generator later pops it out of the cache.
             for i in count():
                 if i == cnt[0]:
                     item = data[i] = next()
                     cnt[0] += 1
                 else:
                     item = data.pop(i)
                 yield item
         it = iter(iterable)
         return (gen(it.next), gen(it.next))
Note, once tee() has made a split, the original iterable should not be used anywhere else; otherwise, the iterable could get advanced without the tee objects being informed.
Note, this member of the toolkit may require significant auxiliary storage (depending on how much temporary data needs to be stored). In general, if one iterator is going to use most or all of the data before the other iterator, it is faster to use list() instead of tee(). New in version 2.4.
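As a brief sketch with arbitrary sample data, the two iterators returned by tee() can each be consumed independently (at the cost of the auxiliary storage noted above):
    >>> from itertools import tee
    >>> a, b = tee('ABCD')
    >>> list(a)
    ['A', 'B', 'C', 'D']
    >>> list(b)
    ['A', 'B', 'C', 'D']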