[Repost] The GIL, Multiprocessing, and Multithreading in Python
<ul>
<li>
[1. GIL (Global Interpreter Lock)](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-1)
</li>
<li>
[2. threading](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-2) <ul>
<li>
[2.1. Creating threads](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-2-1)
</li>
<li>
[2.2. Using thread queues](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-2-2)
</li>
</ul>
</li>
<li>
[3. dummy_threading (fallback for threading)](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-3)
</li>
<li>
[4. thread](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-4)
</li>
<li>
[5. dummy_thread (fallback for thread)](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-5)
</li>
<li>
[6. multiprocessing (process-based parallelism with a thread-like interface)](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-6) <ul>
<li>
[6.1. The Process class](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-6-1)
</li>
<li>
[6.2. Inter-process communication](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-6-2)
</li>
<li>
[6.3. Synchronization](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-6-3)
</li>
<li>
[6.4. Shared state](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-6-4)
</li>
<li>
[6.5. The Pool class](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-6-5)
</li>
</ul>
</li>
<li>
[7. multiprocessing.dummy](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-7)
</li>
<li>
[8. Afterword](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-8)
</li>
<li>
[9. Resources](http://lesliezhu.github.io/public/2015-04-20-python-multi-process-thread.html#sec-9)
</li>
</ul>
<p>
see:
</p>
<blockquote>
<ul class="org-ul">
<li>
[http://www.jeffknupp.com/blog/2012/03/31/pythons-hardest-problem/](http://www.jeffknupp.com/blog/2012/03/31/pythons-hardest-problem/)
</li>
<li>
[http://www.oschina.net/translate/pythons-hardest-problem](http://www.oschina.net/translate/pythons-hardest-problem)
</li>
<li>
[https://news.ycombinator.com/item?id=5815567](https://news.ycombinator.com/item?id=5815567)
</li>
<li>
[http://www.dabeaz.com/GIL/](http://www.dabeaz.com/GIL/)
</li>
</ul>
</blockquote>
<h2 id="sec-1">
<span class="section-number-2">1. GIL (Global Interpreter Lock)</span>
</h2>
<blockquote>
<p>
All other things being equal, the execution speed of a Python program is tied directly to the "speed" of the interpreter. No matter how much you optimize your program, its execution speed still depends on how efficiently the interpreter runs your code.
</p>
</blockquote>
<blockquote>
<p>
For now, multithreading is still the most common way to take advantage of multi-core systems. Although multithreaded programming is a big improvement over "sequential" programming, even careful programmers find it hard to get concurrency exactly right in their code.
</p>
</blockquote>
<blockquote>
<p>
In any Python program, no matter how many processors there are, only one thread is ever executing at any given time.
</p>
</blockquote>
<blockquote>
<p>
In fact, the question comes up so often that Python experts have crafted a standard answer: "Don't use threads, use processes." But that answer is even more confusing than the question it answers.
</p>
</blockquote>
<blockquote>
<p>
The GIL protects access to things like the current thread state and the heap-allocated objects used for garbage collection. There is nothing about the Python language itself, however, that requires a GIL; it is an artifact of the implementation. There are other Python interpreters (and compilers) that do not use a GIL at all. CPython, though, has had one since its very beginning.
</p>
</blockquote>
<blockquote>
<p>
However one feels about Python's GIL, it remains one of the most difficult technical challenges in the Python language. Understanding its implementation requires a thorough grasp of operating system design, multithreaded programming, C, interpreter design, and the implementation of the CPython interpreter. These prerequisites alone keep many developers from studying the GIL more deeply.
</p>
</blockquote>
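<p>
To make the point concrete, here is a small benchmark sketch (added for illustration, not part of the original article; exact timings depend on your machine): a CPU-bound countdown run in two threads takes roughly as long as running it serially under CPython, while two processes can actually use two cores.
</p>
<blockquote>
```python
# Rough GIL demonstration (added sketch, not from the original article).
# CPU-bound work in two threads is serialized by the GIL; two processes are not.
import time
import threading
import multiprocessing

def count(n):
    while n > 0:
        n -= 1

def timed(label, fn):
    t0 = time.time()
    fn()
    print '%s: %.2fs' % (label, time.time() - t0)

def two_threads():
    ts = [threading.Thread(target=count, args=(10**7,)) for _ in range(2)]
    for t in ts: t.start()
    for t in ts: t.join()

def two_processes():
    ps = [multiprocessing.Process(target=count, args=(10**7,)) for _ in range(2)]
    for p in ps: p.start()
    for p in ps: p.join()

if __name__ == '__main__':
    timed('two threads (GIL-bound)', two_threads)
    timed('two processes (parallel)', two_processes)
```
</blockquote>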
<h2 id="sec-2">
<span class="section-number-2">2. threading</span>
</h2>
<p>
The <code>threading</code> module provides a higher-level interface built on top of the <code>thread</code> module; if it is unavailable because <code>thread</code> is missing, <code>dummy_threading</code> can be used as a substitute.
</p>
<blockquote>
<p>
CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously.
</p>
</blockquote>
<p>
Example:
</p>
<blockquote>
```python
import threading, zipfile

class AsyncZip(threading.Thread):
    def __init__(self, infile, outfile):
        threading.Thread.__init__(self)
        self.infile = infile
        self.outfile = outfile

    def run(self):
        f = zipfile.ZipFile(self.outfile, 'w', zipfile.ZIP_DEFLATED)
        f.write(self.infile)
        f.close()
        print 'Finished background zip of: ', self.infile

background = AsyncZip('mydata.txt', 'myarchive.zip')
background.start()
print 'The main program continues to run in foreground.'

background.join()    # Wait for the background task to finish
print 'Main program waited until background was done.'
```
</blockquote>
<h3 id="sec-2-1">
<span class="section-number-3">2.1. Creating threads</span>
</h3>
<blockquote>
```python
import threading
import datetime

class ThreadClass(threading.Thread):
    def run(self):
        now = datetime.datetime.now()
        print "%s says Hello World at time: %s" % (self.getName(), now)

for i in range(2):
    t = ThreadClass()
    t.start()
```
</blockquote>
<h3 id="sec-2-2">
<span class="section-number-3">2.2. Using thread queues</span>
</h3>
<blockquote>
```python
import Queue
import threading
import urllib2
import time
from BeautifulSoup import BeautifulSoup

hosts = ["http://yahoo.com", "http://google.com", "http://amazon.com",
         "http://ibm.com", "http://apple.com"]

queue = Queue.Queue()
out_queue = Queue.Queue()

class ThreadUrl(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, queue, out_queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.out_queue = out_queue

    def run(self):
        while True:
            # grabs host from queue
            host = self.queue.get()
            # grabs urls of hosts and then grabs chunk of webpage
            url = urllib2.urlopen(host)
            chunk = url.read()
            # place chunk into out queue
            self.out_queue.put(chunk)
            # signals to queue job is done
            self.queue.task_done()

class DatamineThread(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, out_queue):
        threading.Thread.__init__(self)
        self.out_queue = out_queue

    def run(self):
        while True:
            # grabs chunk from out queue
            chunk = self.out_queue.get()
            # parse the chunk
            soup = BeautifulSoup(chunk)
            print soup.findAll(['title'])
            # signals to queue job is done
            self.out_queue.task_done()

start = time.time()

def main():
    # spawn a pool of threads, and pass them queue instance
    for i in range(5):
        t = ThreadUrl(queue, out_queue)
        t.setDaemon(True)
        t.start()

    # populate queue with data
    for host in hosts:
        queue.put(host)

    for i in range(5):
        dt = DatamineThread(out_queue)
        dt.setDaemon(True)
        dt.start()

    # wait on the queue until everything has been processed
    queue.join()
    out_queue.join()

main()
print "Elapsed Time: %s" % (time.time() - start)
```
</blockquote>
<h2 id="sec-3">
<span class="section-number-2">3. dummy_threading (fallback for threading)</span>
</h2>
<p>
The <code>dummy_threading</code> module exactly replicates the interface of the <code>threading</code> module; if <code>thread</code> is unavailable, it can be used as a drop-in replacement.
</p>
<p>
Usage:
</p>
<blockquote>
```python
try:
    import threading as _threading
except ImportError:
    import dummy_threading as _threading
```
</blockquote>
<h2 id="sec-4">
<span class="section-number-2">4. thread</span>
</h2>
<p>
In Python 3 this module is called <code>_thread</code>; the higher-level <code>threading</code> module should be used instead whenever possible.
</p>
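<p>
For completeness, a minimal sketch of the low-level interface (added here, not in the original article): <code>thread.start_new_thread()</code> simply launches a function in a new thread and offers no <code>join()</code>, which is one reason the higher-level <code>threading</code> module is preferred.
</p>
<blockquote>
```python
# Minimal sketch of the low-level thread module (Python 2); added example.
# There is no join(), so a crude shared list is used to wait for the workers.
import thread
import time

done = []

def worker(name):
    print 'hello from', name
    done.append(name)

if __name__ == '__main__':
    thread.start_new_thread(worker, ('thread-1',))
    thread.start_new_thread(worker, ('thread-2',))
    while len(done) < 2:          # busy-wait; threading.Thread.join() is nicer
        time.sleep(0.1)
```
</blockquote>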
<h2 id="sec-5">
<span class="section-number-2">5. dummy_thread (fallback for thread)</span>
</h2>
<p>
The <code>dummy_thread</code> module exactly replicates the interface of the <code>thread</code> module; if <code>thread</code> is unavailable, it can be used instead.
</p>
<p>
In Python 3 it is called <code>_dummy_thread</code>. Usage:
</p>
<blockquote>
```python
try:
    import thread as _thread
except ImportError:
    import dummy_thread as _thread
```
</blockquote>
<p>
It is best to use <code>dummy_threading</code> instead.
</p>
<h2 id="sec-6">
<span class="section-number-2">6. multiprocessing (process-based parallelism with a thread-like interface)</span>
</h2>
<p>
see:
</p>
<blockquote>
<ul class="org-ul">
<li>
[https://docs.python.org/2/library/multiprocessing.html](https://docs.python.org/2/library/multiprocessing.html)
</li>
</ul>
</blockquote>
<p>
The <code>multiprocessing</code> module sidesteps the problems caused by the GIL by creating subprocesses instead of threads.
</p>
<p>
Example:
</p>
<blockquote>
```python
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))
```
</blockquote>
<h3 id="sec-6-1">
<span class="section-number-3">6.1. The Process class</span>
</h3>
<p>
Processes are created with the <code>Process</code> class:
</p>
<blockquote>
```python
from multiprocessing import Process

def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
```
</blockquote>
<h3 id="sec-6-2">
<span class="section-number-3">6.2. Inter-process communication</span>
</h3>
<p>
Using a <code>Queue</code>:
</p>
<blockquote>
```python
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()
```
</blockquote>
<p>
Using a <code>Pipe</code>:
</p>
<blockquote>
```python
from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print parent_conn.recv()    # prints "[42, None, 'hello']"
    p.join()
```
</blockquote>
<h3 id="sec-6-3">
<span class="section-number-3">6.3. Synchronization</span>
</h3>
<p>
Using a lock:
</p>
<blockquote>
```python
from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    print 'hello world', i
    l.release()

if __name__ == '__main__':
    lock = Lock()
    for num in range(10):
        Process(target=f, args=(lock, num)).start()
```
</blockquote>
<h3 id="sec-6-4">
<span class="section-number-3">6.4. Shared state</span>
</h3>
<p>
Shared state should be avoided as much as possible.
</p>
<p>
Using shared memory:
</p>
<blockquote>
```python
from multiprocessing import Process, Value, Array

def f(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

if __name__ == '__main__':
    num = Value('d', 0.0)
    arr = Array('i', range(10))

    p = Process(target=f, args=(num, arr))
    p.start()
    p.join()

    print num.value
    print arr[:]
```
</blockquote>
<p>
Using a server process (<code>Manager</code>):
</p>
<blockquote>
```python
from multiprocessing import Process, Manager

def f(d, l):
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()

if __name__ == '__main__':
    manager = Manager()

    d = manager.dict()
    l = manager.list(range(10))

    p = Process(target=f, args=(d, l))
    p.start()
    p.join()

    print d
    print l
```
</blockquote>
<p>
The second approach (a server process) supports more data types, such as list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Queue, Value, and Array.
</p>
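<p>
As a small illustration of those extra types (an added sketch, not from the original article), the manager can hand out a <code>Namespace</code> and a <code>Lock</code> proxy that several processes share:
</p>
<blockquote>
```python
# Added sketch: sharing a Namespace and a Lock through a Manager.
from multiprocessing import Process, Manager

def f(ns, lock):
    lock.acquire()
    try:
        ns.counter += 1          # attribute access goes through the proxy
    finally:
        lock.release()

if __name__ == '__main__':
    manager = Manager()
    ns = manager.Namespace()
    ns.counter = 0
    lock = manager.Lock()

    procs = [Process(target=f, args=(ns, lock)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    print ns.counter             # prints 4
```
</blockquote>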
<h3 id="sec-6-5">
<span class="section-number-3">6.5. The Pool class</span>
</h3>
<p>
A pool of worker processes can be created with the <code>Pool</code> class:
</p>
<blockquote>
```python
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)              # start 4 worker processes
    result = pool.apply_async(f, [10])    # evaluate "f(10)" asynchronously
    print result.get(timeout=1)           # prints "100" unless your computer is very slow
    print pool.map(f, range(10))          # prints "[0, 1, 4,..., 81]"
```
</blockquote>
<h2 id="sec-7">
<span class="section-number-2">7. multiprocessing.dummy</span>
</h2>
<p>
The official documentation devotes only a single sentence to it:
</p>
<blockquote>
<p>
multiprocessing.dummy replicates the API of multiprocessing but is no more than a wrapper around the threading module.
</p>
</blockquote>
<ul class="org-ul">
<li>
<code>multiprocessing.dummy</code> is a complete clone of the <code>multiprocessing</code> module; the only difference is that <code>multiprocessing</code> works with processes, while the dummy module works with threads.
</li>
<li>
This lets you pick a library according to the task: use <code>multiprocessing.dummy</code> for I/O-bound tasks and <code>multiprocessing</code> for CPU-bound tasks.
</li>
</ul>
<p>
Example:
</p>
<blockquote>
```python
import urllib2
from multiprocessing.dummy import Pool as ThreadPool

urls = [
    'http://www.python.org',
    'http://www.python.org/about/',
    'http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html',
    'http://www.python.org/doc/',
    'http://www.python.org/download/',
    'http://www.python.org/getit/',
    'http://www.python.org/community/',
    'https://wiki.python.org/moin/',
    'http://planet.python.org/',
    'https://wiki.python.org/moin/LocalUserGroups',
    'http://www.python.org/psf/',
    'http://docs.python.org/devguide/',
    'http://www.python.org/community/awards/'
    # etc..
]

# Make the Pool of workers
pool = ThreadPool(4)

# Open the urls in their own threads
# and return the results
results = pool.map(urllib2.urlopen, urls)

# close the pool and wait for the work to finish
pool.close()
pool.join()

# For comparison: the same work done serially, without a thread pool
results = []
for url in urls:
    result = urllib2.urlopen(url)
    results.append(result)
```
</blockquote>
<h2 id="sec-8">
<span class="section-number-2">8. Afterword</span>
</h2>
<blockquote>
<ul class="org-ul">
<li>
If you choose multithreading, prefer the <code>threading</code> module, and keep the impact of the GIL in mind.
</li>
<li>
If threads are not strictly necessary, use the multiprocessing module <code>multiprocessing</code>; it also supports thread-based pools via <code>multiprocessing.dummy</code>.
</li>
<li>
Work out whether the task at hand is I/O-bound or CPU-bound (see the sketch after this list).
</li>
</ul>
</blockquote>
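<p>
Because <code>multiprocessing.dummy</code> mirrors the <code>multiprocessing</code> API, switching between processes and threads is essentially a one-line change. The sketch below is added for illustration (<code>cpu_bound</code> and <code>io_bound</code> are made-up helpers, not from the original article) and shows how you might pick one pool or the other depending on the workload:
</p>
<blockquote>
```python
# Added sketch: the same Pool API backed by processes or by threads.
from multiprocessing import Pool                        # processes: CPU-bound work
from multiprocessing.dummy import Pool as ThreadPool    # threads: I/O-bound work
import urllib2

def cpu_bound(n):
    # pure computation; threads would be serialized by the GIL here
    return sum(i * i for i in xrange(n))

def io_bound(url):
    # mostly waiting on the network; threads work well here
    return len(urllib2.urlopen(url).read())

if __name__ == '__main__':
    p = Pool(4)
    print p.map(cpu_bound, [10**6] * 8)
    p.close(); p.join()

    t = ThreadPool(4)
    print t.map(io_bound, ['http://www.python.org'] * 4)
    t.close(); t.join()
```
</blockquote>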
<h2 id="sec-9">
<span class="section-number-2">9. Resources</span>
</h2>
<blockquote>
<ul class="org-ul">
<li>
[https://docs.python.org/2/library/threading.html](https://docs.python.org/2/library/threading.html)
</li>
<li>
[https://docs.python.org/2/library/thread.html#module-thread](https://docs.python.org/2/library/thread.html#module-thread)
</li>
<li>
[http://segmentfault.com/a/1190000000414339](http://segmentfault.com/a/1190000000414339)
</li>
<li>
[http://www.oschina.net/translate/pythons-hardest-problem](http://www.oschina.net/translate/pythons-hardest-problem)
</li>
<li>
[http://www.w3cschool.cc/python/python-multithreading.html](http://www.w3cschool.cc/python/python-multithreading.html)
</li>
<li>
[Python threads: communication and stopping](http://eli.thegreenplace.net/2011/12/27/python-threads-communication-and-stopping/)
</li>
<li>
[Python - parallelizing CPU-bound tasks with multiprocessing](http://eli.thegreenplace.net/2012/01/16/python-parallelizing-cpu-bound-tasks-with-multiprocessing/)
</li>
<li>
[Python Multithreading Tutorial: Concurrency and Parallelism](http://www.toptal.com/python/beginners-guide-to-concurrency-and-parallelism-in-python)
</li>
<li>
[An introduction to parallel programming–using Python’s multiprocessing module](http://sebastianraschka.com/Articles/2014_multiprocessing_intro.html)
</li>
<li>
[multiprocessing Basics](http://pymotw.com/2/multiprocessing/basics.html)
</li>
<li>
[Python多进程模块Multiprocessing介绍](http://cloga.info/python/2014/01/12/PythonMultiprocessingintro/)
</li>
<li>
[Multiprocessing vs Threading Python](http://stackoverflow.com/questions/3044580/multiprocessing-vs-threading-python)
</li>
<li>
[Parallelism in one line–A Better Model for Day to Day Threading Tasks](http://chriskiehl.com/article/parallelism-in-one-line/)
</li>
<li>
[一行 Python 实现并行化 – 日常多线程操作的新思路](http://segmentfault.com/a/1190000000414339)
</li>
<li>
[使用 Python 进行线程编程](https://www.ibm.com/developerworks/cn/aix/library/au-threadingpython/)
</li>
</ul>
</blockquote>
- Original author: 大鱼
- Original link: https://brucedone.com/archives/86/
- Copyright notice: This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Non-commercial reposts must credit the source (author and original link); for commercial use, please contact the author for permission.