parallel processing - Python multiprocessing unexpected results


I have code containing an iterator, which works well:

    import time
    import multiprocessing

    m = [0, 1, 2, 3]

    class gener(object):
        def __init__(self, m):
            self.m = m
            self.c = 0

        def __iter__(self):
            return self

        def next(self):
            time.sleep(1)
            ret = self.m[self.c]
            self.c += 1
            return ret

    tt = gener(m)

    def gen(t):
        return t.next()

    print gen(tt)
    print gen(tt)
    print gen(tt)

Output:

    0
    1
    2

But if I try to run it in parallel processes, I don't get the expected results:

    import time
    import multiprocessing

    m = [0, 1, 2, 3]

    class gener(object):
        def __init__(self, m):
            self.m = m
            self.c = 0

        def __iter__(self):
            return self

        def next(self):
            time.sleep(1)
            ret = self.m[self.c]
            self.c += 1
            return ret

    tt = gener(m)

    def gen(t):
        return t.next()

    job1 = multiprocessing.Process(target=gen, args=(tt,))
    print job1.start()

    job2 = multiprocessing.Process(target=gen, args=(tt,))
    print job2.start()

    job3 = multiprocessing.Process(target=gen, args=(tt,))
    print job3.start()

Output:

    None
    None
    None

I can't figure out how to use the iterator from parallel processes. Can anyone help me? Thank you!

Update:

Following @Anand S Kumar's helpful answer, I updated the code and it now works fine, except that the output is ambiguous. I'm trying to figure out what's wrong - maybe it's a threading issue; maybe Anand can help me :))

    from threading import Thread, Lock
    import time

    m = [0, 1, 2, 3]
    starter = 0

    class gener(object):
        def __init__(self, m):
            self.m = m
            self.c = 0

        def __iter__(self):
            return self

        def next(self):
            time.sleep(1)
            ret = self.m[self.c]
            self.c += 1
            return ret

    tt = gener(m)

    def f(t):
        global starter
        lock = Lock()
        lock.acquire()
        try:
            starter = t.next()
        finally:
            lock.release()

    t1 = Thread(target=f, args=(tt,))
    t1.start()

    t2 = Thread(target=f, args=(tt,))
    t2.start()

    t3 = Thread(target=f, args=(tt,))
    t3.start()

    t1.join()
    print starter
    t2.join()
    print starter
    t3.join()
    print starter

Different outputs from the same code, one line per run:

    0 1 2
    2 2 2
    0 2 2

You are trying to print the return value of job.start(). That function does not return anything, hence it prints None.

Instead of printing the return value of job.start(), you can move the print statement into the gen(t) function -

    def gen(t):
        print t.next()

and then run the program without printing job.start().
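Putting that together, a minimal sketch of the corrected program (the jobs list and the join() calls are my additions, so the parent waits for the workers to finish):

    import time
    import multiprocessing

    m = [0, 1, 2, 3]

    class gener(object):
        def __init__(self, m):
            self.m = m
            self.c = 0

        def next(self):
            time.sleep(1)
            ret = self.m[self.c]
            self.c += 1
            return ret

    tt = gener(m)

    def gen(t):
        print t.next()    # print inside the worker instead of printing start()'s return value

    if __name__ == '__main__':
        jobs = [multiprocessing.Process(target=gen, args=(tt,)) for _ in range(3)]
        for job in jobs:
            job.start()   # start() returns None, so there is nothing useful to print here
        for job in jobs:
            job.join()    # wait for all workers to finish

Note that every child process receives its own copy of tt, so all three workers print 0; this is the shared-state issue discussed below.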

If you want to receive the return value from the function, you can use Pool from the multiprocessing module. [documentation]

An example from the documentation -

    from multiprocessing import Pool

    def f(x):
        return x*x

    if __name__ == '__main__':
        pool = Pool(processes=4)              # start 4 worker processes
        result = pool.apply_async(f, [10])    # evaluate "f(10)" asynchronously
        print result.get(timeout=1)           # prints "100" unless the computer is *very* slow
        print pool.map(f, range(10))
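Applied to the code from the question (reusing the gener class, tt, and gen defined above), a hypothetical sketch might look like this. Keep in mind that apply_async pickles its arguments, so each worker operates on an independent copy of tt:

    from multiprocessing import Pool

    if __name__ == '__main__':
        pool = Pool(processes=3)
        # each task receives its own pickled copy of tt, so the counter is not shared
        results = [pool.apply_async(gen, (tt,)) for _ in range(3)]
        print [r.get() for r in results]    # prints [0, 0, 0], not [0, 1, 2]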

But please note that you are creating multiple processes, not threads, so they do not share global variables.
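A tiny self-contained demonstration of this (the bump function and counter variable are made up for illustration):

    import multiprocessing

    counter = 0

    def bump():
        global counter
        counter += 1
        print 'in child:', counter    # prints 1 -- the child's own copy

    if __name__ == '__main__':
        p = multiprocessing.Process(target=bump)
        p.start()
        p.join()
        print 'in parent:', counter   # still 0 -- the parent's global is untouched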

I believe you actually want threads, and maybe the example below will get you started -

    from threading import Thread, Lock

    m = [0, 1, 2, 3]
    starter = 0

    class gener(object):
        def __init__(self, m):
            self.m = m
            self.c = 0

        def __iter__(self):
            return self

        def next(self):
            ret = self.m[self.c]
            self.c += 1
            return ret

    tt = gener(m)

    def f(t):
        global starter
        lock = Lock()
        lock.acquire()
        try:
            starter = t.next()
        finally:
            lock.release()

    t1 = Thread(target=f, args=(tt,))
    t1.start()
    t2 = Thread(target=f, args=(tt,))
    t2.start()
    t1.join()
    t2.join()
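One caveat about this example, which is also the likely cause of the ambiguous output in the update: f() creates a fresh Lock on every call, so the threads never actually contend on the same lock, and the print starter lines in the parent can observe a value that a later thread has already overwritten. A sketch with the lock moved to module level so all threads share it (my rearrangement, not part of the original answer):

    from threading import Thread, Lock
    import time

    m = [0, 1, 2, 3]
    starter = 0
    lock = Lock()                 # a single lock shared by all threads

    class gener(object):
        def __init__(self, m):
            self.m = m
            self.c = 0

        def next(self):
            time.sleep(1)
            ret = self.m[self.c]
            self.c += 1
            return ret

    tt = gener(m)

    def f(t):
        global starter
        lock.acquire()            # every thread now waits on the same lock
        try:
            starter = t.next()
        finally:
            lock.release()

    threads = [Thread(target=f, args=(tt,)) for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print starter                 # always 2: the last value handed out by the iterator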
