bpo-40390: Implement channel_send_wait for subinterpreters #19715
Conversation
Thanks for working on this! Overall the implementation seems good. I've left some comments on things to address.
Regarding the approach to providing a blocking "send()"... After having seen the block-in-the-function approach here, I still have reservations about it. There's a lot of value to keeping the "send" part separate from the "wait" part.
Here are some alternatives that offer that separation:
- return an acquired `threading.Lock` which the receiving interpreter releases
- return a `wait(timeout=None)` function which the receiving interpreter will un-block
- caller passes in an acquired lock that the receiving interpreter releases
- caller passes in a callback function that the receiving interpreter triggers (via `_Py_AddPendingCall()`)
My preference is the first one (return a lock). Since we are crossing the interpreter boundary we need to be extra careful. A user-defined callback is too risky. Of the remaining options, the first one is the simplest and most consistent conceptually.
FWIW, I expect that most of the code in this PR should still work mostly as-is.
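To make the "return an acquired lock" alternative concrete, here is a minimal sketch of that design, simulated with ordinary threads and a `queue.Queue` instead of real subinterpreter channels. The `Channel` class and its methods are hypothetical illustrations, not the `_xxsubinterpreters` API: the point is only that "send" and "wait" stay separate, with the receiver releasing the lock.

```python
import queue
import threading

class Channel:
    """Toy stand-in for a subinterpreter channel (threads, not interpreters)."""

    def __init__(self):
        self._queue = queue.Queue()

    def send(self, obj):
        """Enqueue obj and return an already-acquired lock.

        The receiver releases the lock once it has consumed the item,
        so the sender may wait on it (with an optional timeout) or not.
        """
        lock = threading.Lock()
        lock.acquire()
        self._queue.put((obj, lock))
        return lock

    def recv(self):
        obj, lock = self._queue.get()
        lock.release()  # un-block the sender, if it is waiting
        return obj

ch = Channel()
results = []

def receiver():
    results.append(ch.recv())

lock = ch.send(b"data")           # "send" part: returns immediately
t = threading.Thread(target=receiver)
t.start()
# "wait" part, decoupled from the send: block until the item is received.
received = lock.acquire(timeout=5)
t.join()
# received is True and results == [b"data"]
```

Because the lock is returned rather than waited on inside `send()`, a caller that does not care about delivery can simply drop it.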
```c
Return a new object from the data at the front of the channel's queue.");

static PyObject *
channel_send_wait(PyObject *self, PyObject *args, PyObject *kwds)
```
Please move this to right after `_channel_send()` (right before `channel_recv()`).
Also, how is `channel_send(cid, obj)` any different than `channel_send_wait(cid, obj, 0)`? It might make sense to have them be the same function.
At least the function name passed to `PyArg_ParseTupleAndKeywords` for error messages would be different?
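One way the two entry points could be merged, sketched here at the Python level with placeholder backends (none of these names are the real C functions — `make_channel_send`, `raw_send`, and `raw_send_wait` are hypothetical): `timeout=None` keeps the fire-and-forget `channel_send()` behavior, while any numeric timeout goes through the waiting path.

```python
def make_channel_send(raw_send, raw_send_wait):
    """Build a single channel_send() covering both behaviors."""
    def channel_send(cid, obj, timeout=None):
        if timeout is None:
            raw_send(cid, obj)          # plain send, no waiting
            return True
        # Block until received, or until `timeout` seconds pass.
        return raw_send_wait(cid, obj, timeout)
    return channel_send

# Demo with stub backends that just record each call.
calls = []
channel_send = make_channel_send(
    lambda cid, obj: calls.append(("send", cid, obj)),
    lambda cid, obj, t: calls.append(("send_wait", cid, obj, t)) or True,
)
channel_send(7, b"x")
channel_send(7, b"y", timeout=0)
# calls == [("send", 7, b"x"), ("send_wait", 7, b"y", 0)]
```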
```c
void *wait_lock = _channelitem_wait_lock_new();
if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
    return NULL;
}

long long microseconds;
if (timeout >= 0) {
    microseconds = timeout * 1000000;
}
else {
    microseconds = -1;
}
PyLockStatus lock_rc = _channelitem_wait_lock_wait(wait_lock, microseconds);
if (lock_rc == PY_LOCK_ACQUIRED) {
    Py_RETURN_TRUE;
}
else {
    Py_RETURN_FALSE;
}
```
We've tried to avoid putting lower-level code like this in the actual extension module functions. Either move it up into `_channel_send()` (my preference) or into a new `_channel_send_wait()`. Then this code would look more like the following:
Suggested change, replacing:

```c
void *wait_lock = _channelitem_wait_lock_new();
if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
    return NULL;
}

long long microseconds;
if (timeout >= 0) {
    microseconds = timeout * 1000000;
}
else {
    microseconds = -1;
}
PyLockStatus lock_rc = _channelitem_wait_lock_wait(wait_lock, microseconds);
if (lock_rc == PY_LOCK_ACQUIRED) {
    Py_RETURN_TRUE;
}
else {
    Py_RETURN_FALSE;
}
```

with:

```c
long long microseconds = timeout * 1000000;
if (_channel_send(&_globals.channels, cid, obj, timeout) != 0) {
    if (PyErr_Occurred()) {
        return NULL;
    }
    Py_RETURN_FALSE;
}
Py_RETURN_TRUE;
```
```c
    return NULL;
}
void *wait_lock = _channelitem_wait_lock_new();
```
Why `void *` instead of `_channelitem_wait_lock *`?
```c
if (wait_lock != NULL) {
    _channelitem_wait_lock_sent(item->wait_lock);
}
```
Why do this here? I'd expect it to happen right next to where `_channelitem_wait_lock_()` was called.
```c
_channelitem *last = item;
item = item->next;
if (last->wait_lock != NULL) {
    _channelitem_wait_lock_recv(last->wait_lock, 0);
```
Presumably this is so that the sending interpreter stops blocking when the channel is closed.
Does this cause `ChannelEmptyError` or `ChannelClosedError` to get raised in those interpreters? Do we have tests for that?
What about in the `channel_close(cid, send=True)` case? I would expect no failure in that case. Are there tests for that?
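The kind of test being asked for can be sketched with threads standing in for subinterpreters. Everything below is hypothetical illustration, not the PR's code: `Channel`, `send_wait`, and this `ChannelClosedError` are toy stand-ins. The behavior under test is that a sender blocked in a send-and-wait call is woken when the channel is closed and observes a "closed" error rather than hanging.

```python
import threading
import time

class ChannelClosedError(Exception):
    """Stand-in for the real channel-closed exception."""

class Channel:
    def __init__(self):
        self._cond = threading.Condition()
        self._items = []
        self._closed = False

    def send_wait(self, obj, timeout=None):
        with self._cond:
            if self._closed:
                raise ChannelClosedError
            entry = [obj, False]              # [payload, received?]
            self._items.append(entry)
            self._cond.notify_all()
            # Wake on receipt, close, or timeout.
            self._cond.wait_for(lambda: entry[1] or self._closed, timeout)
            if entry[1]:
                return True
            if self._closed:
                raise ChannelClosedError       # closed while waiting
            return False                       # timed out

    def close(self):
        with self._cond:
            self._closed = True
            self._cond.notify_all()            # un-block pending senders

ch = Channel()
errors = []

def sender():
    try:
        ch.send_wait(b"x")                     # blocks: nobody will recv
    except ChannelClosedError as exc:
        errors.append(type(exc).__name__)

t = threading.Thread(target=sender)
t.start()
time.sleep(0.1)        # give the sender a moment to block
ch.close()
t.join()
# errors == ["ChannelClosedError"]
```

A companion test for the `channel_close(cid, send=True)` case would assert the opposite: pending sends drain normally and no error is raised.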
```c
void *wait_lock = _channelitem_wait_lock_new();
if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
    return NULL;
}
```
If `timeout` is 0 then isn't this the same behavior as `channel_send()`? In that case you can add a short-circuit:
Suggested change, replacing:

```c
void *wait_lock = _channelitem_wait_lock_new();
if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
    return NULL;
}
```

with:

```c
if (timeout == 0) {
    if (_channel_send(&_globals.channels, cid, obj, NULL) != 0) {
        return NULL;
    }
    Py_RETURN_NONE;
}
void *wait_lock = _channelitem_wait_lock_new();
if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
    return NULL;
}
```
```python
import math
start = time.time()
rc = _interpreters.channel_send_wait({cid}, b"send", timeout=1)
```
It would be worth making it clear (e.g. in a comment) that we expect this to time out (since it is happening in the same thread where we expect `recv()` to be called later).
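The reason the timeout is expected can be shown with a plain `threading.Lock` (an illustrative sketch, not the test code itself): a lock held by the current thread cannot be released before `recv()` runs later in that same thread, so a timed wait on it must fail.

```python
import threading

lock = threading.Lock()
lock.acquire()          # simulates the wait lock created by the send

# Waiting in the same thread that would later call recv(): nothing can
# release the lock in the meantime, so the timed acquire must fail.
rc = lock.acquire(timeout=0.1)
# rc is False
```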
```python
import _xxsubinterpreters as _interpreters
import time
rc = _interpreters.channel_send_wait({cid}, b"send", timeout=10)
```
The test function says "no_timeout". Did you mean "with_timeout"? "timeout_not_hit"?
```python
obj = interpreters.channel_recv(cid)
self.assertEqual(obj, b"send")
t.join()
assert thread_exc is None, f"{thread_exc}"
```
Suggested change, replacing:

```python
assert thread_exc is None, f"{thread_exc}"
```

with:

```python
self.assertIsNone(thread_exc, f"{thread_exc}")
```
```python
obj = interpreters.channel_recv(cid)
self.assertEqual(obj, b"send")
t.join()
assert thread_exc is None, f"{thread_exc}"
```
Suggested change, replacing:

```python
assert thread_exc is None, f"{thread_exc}"
```

with:

```python
self.assertIsNone(thread_exc, f"{thread_exc}")
```
bedevere-bot commented Apr 28, 2020
A Python core developer has requested some changes be made to your pull request before we can consider merging it. If you could please address their requests along with any other requests in other reviews from core developers, that would be appreciated. Once you have made the requested changes, please leave a comment on this pull request containing the phrase […]
FYI, there is a decent chance that we will make […]
https://bugs.python.org/issue40390