bpo-40390: Implement channel_send_wait for subinterpreters#19715


Conversation

@benedwards14 (Contributor) commented Apr 25, 2020 (edited by bedevere-bot)

@ericsnowcurrently (Member) left a comment:

Thanks for working on this! Overall the implementation seems good. I've left some comments on things to address.

Regarding the approach to providing a blocking "send()"... After having seen the block-in-the-function approach here, I still have reservations about it. There's a lot of value to keeping the "send" part separate from the "wait" part.

Here are some alternatives that offer that separation:

  • return an acquired `threading.Lock`, which the receiving interpreter releases
  • return a `wait(timeout=None)` function, which the receiving interpreter will un-block
  • caller passes in an acquired lock that the receiving interpreter releases
  • caller passes in a callback function that the receiving interpreter triggers (via `_Py_AddPendingCall()`)

My preference is the first one (return a lock). Since we are crossing the interpreter boundary we need to be extra careful: a user-defined callback is too risky. Of the remaining options, the first is the simplest and most consistent conceptually.

FWIW, I expect that most of the code in this PR should still work mostly as-is.

```c
Return a new object from the data at the front of the channel's queue.");

static PyObject *
channel_send_wait(PyObject *self, PyObject *args, PyObject *kwds)
```


Please move this to right after `_channel_send()` (right before `channel_recv()`).

Also, how is `channel_send(cid, obj)` any different from `channel_send_wait(cid, obj, 0)`? It might make sense to have them be the same function.

A contributor replied:

At least the function name passed to `PyArg_ParseTupleAndKeywords` for error messages would be different?
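As a rough Python model of folding the two entry points into one (hypothetical names; the real code is C), `timeout=0` could preserve the fire-and-forget behavior of `channel_send()` while a positive timeout blocks until the object is received:

```python
import queue
import threading

_chan = queue.Queue()  # stands in for the channel

def send(obj, timeout=0):
    if timeout == 0:
        # Fire-and-forget: today's channel_send() behavior.
        _chan.put((obj, None))
        return True
    # Blocking variant: wait on a lock the receiver releases.
    lock = threading.Lock()
    lock.acquire()
    _chan.put((obj, lock))
    return lock.acquire(timeout=timeout)  # True iff received in time

def recv():
    obj, lock = _chan.get()
    if lock is not None:
        lock.release()
    return obj

assert send(b"a") is True                      # returns immediately
t = threading.Thread(target=lambda: (recv(), recv()))
t.start()
ok = send(b"b", timeout=5)                     # blocks until recv() runs
t.join()
```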

Comment on lines +2547 to +2566
```c
    void *wait_lock = _channelitem_wait_lock_new();
    if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
        return NULL;
    }

    long long microseconds;
    if (timeout >= 0) {
        microseconds = timeout * 1000000;
    }
    else {
        microseconds = -1;
    }
    PyLockStatus lock_rc = _channelitem_wait_lock_wait(wait_lock, microseconds);

    if (lock_rc == PY_LOCK_ACQUIRED) {
        Py_RETURN_TRUE;
    }
    else {
        Py_RETURN_FALSE;
    }
```


We've tried to avoid putting lower-level code like this in the actual extension module functions. Either move it up into `_channel_send()` (my preference) or into a new `_channel_send_wait()`. Then this code would look more like the following:

Suggested change

Before:

```c
    void *wait_lock = _channelitem_wait_lock_new();
    if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
        return NULL;
    }
    long long microseconds;
    if (timeout >= 0) {
        microseconds = timeout * 1000000;
    }
    else {
        microseconds = -1;
    }
    PyLockStatus lock_rc = _channelitem_wait_lock_wait(wait_lock, microseconds);
    if (lock_rc == PY_LOCK_ACQUIRED) {
        Py_RETURN_TRUE;
    }
    else {
        Py_RETURN_FALSE;
    }
```

After:

```c
    long long microseconds = timeout * 1000000;
    if (_channel_send(&_globals.channels, cid, obj, timeout) != 0) {
        if (PyErr_Occurred()) {
            return NULL;
        }
        Py_RETURN_FALSE;
    }
    Py_RETURN_TRUE;
```

```c
    return NULL;
}

void *wait_lock = _channelitem_wait_lock_new();
```


Why `void *` instead of `_channelitem_wait_lock *`?

Comment on lines +614 to +616
```c
if (wait_lock != NULL) {
    _channelitem_wait_lock_sent(item->wait_lock);
}
```


Why do this here? I'd expect it to happen right next to where `_channelitem_wait_lock_()` was called.

```c
_channelitem *last = item;
item = item->next;
if (last->wait_lock != NULL) {
    _channelitem_wait_lock_recv(last->wait_lock, 0);
```


Presumably this is so that the sending interpreter stops blocking when the channel is closed.

Does this cause `ChannelEmptyError` or `ChannelClosedError` to get raised in those interpreters? Do we have tests for that?

What about in the `channel_close(cid, send=True)` case? I would expect no failure in that case. Are there tests for that?
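A pure-Python model of the closed-channel question, assuming the design in which closing releases every pending wait lock so blocked senders wake up and raise; `Channel` and `ChannelClosedError` here are illustrative stand-ins for the extension's C types:

```python
import threading
import time

class ChannelClosedError(Exception):
    pass

class Channel:
    def __init__(self):
        self._items = []      # (obj, wait_lock) pairs
        self.closed = False

    def send_wait(self, obj, timeout=-1):
        lock = threading.Lock()
        lock.acquire()
        self._items.append((obj, lock))
        lock.acquire(timeout=timeout)  # blocks until recv()/close() releases
        if self.closed:
            raise ChannelClosedError
        return True

    def close(self):
        self.closed = True
        for _, lock in self._items:
            lock.release()             # wake every blocked sender
        self._items.clear()

ch = Channel()
caught = []

def blocked_send():
    try:
        ch.send_wait(b"data")
    except ChannelClosedError as exc:
        caught.append(exc)

t = threading.Thread(target=blocked_send)
t.start()
time.sleep(0.1)   # let the sender block on its wait lock
ch.close()
t.join()          # sender wakes up and records ChannelClosedError
```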

Comment on lines +2547 to +2550
```c
void *wait_lock = _channelitem_wait_lock_new();
if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
    return NULL;
}
```

If `timeout` is 0 then isn't this the same behavior as `channel_send()`? In that case you can add a short-circuit:

Suggested change

Before:

```c
    void *wait_lock = _channelitem_wait_lock_new();
    if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
        return NULL;
    }
```

After:

```c
    if (timeout == 0) {
        if (_channel_send(&_globals.channels, cid, obj, NULL) != 0) {
            return NULL;
        }
        Py_RETURN_NONE;
    }
    void *wait_lock = _channelitem_wait_lock_new();
    if (_channel_send(&_globals.channels, cid, obj, wait_lock) != 0) {
        return NULL;
    }
```

```python
import math
start = time.time()
rc = _interpreters.channel_send_wait({cid}, b"send", timeout=1)
```


It would be worth making it clear (e.g. in a comment) that we expect this to time out (since it is happening in the same thread where we expect `recv()` to be called later).

```python
import _xxsubinterpreters as _interpreters
import time
rc = _interpreters.channel_send_wait({cid}, b"send", timeout=10)
```


The test function says "no_timeout". Did you mean "with_timeout"? "timeout_not_hit"?

```python
obj = interpreters.channel_recv(cid)
self.assertEqual(obj, b"send")
t.join()
assert thread_exc is None, f"{thread_exc}"
```


Suggested change

```python
assert thread_exc is None, f"{thread_exc}"
```

```python
self.assertIsNone(thread_exc, f"{thread_exc}")
```

```python
obj = interpreters.channel_recv(cid)
self.assertEqual(obj, b"send")
t.join()
assert thread_exc is None, f"{thread_exc}"
```


Suggested change

```python
assert thread_exc is None, f"{thread_exc}"
```

```python
self.assertIsNone(thread_exc, f"{thread_exc}")
```
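For context on why `self.assertIsNone` is preferable to a bare `assert`: the bare statement disappears under `python -O`, while the unittest method always runs and folds the extra message into its failure report. A minimal illustration (the `_Demo` class is just scaffolding for this example):

```python
import unittest

class _Demo(unittest.TestCase):
    def runTest(self):  # lets the TestCase be instantiated standalone
        pass

tc = _Demo()
tc.assertIsNone(None)  # passes silently

msg = ""
try:
    # A non-None value fails, and the custom message is appended to
    # unittest's own "... is not None" diagnostic.
    tc.assertIsNone("unexpected exception", "thread raised")
except AssertionError as exc:
    msg = str(exc)
```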

@bedevere-bot commented:

A Python core developer has requested some changes be made to your pull request before we can consider merging it. If you could please address their requests along with any other requests in other reviews from core developers that would be appreciated.

Once you have made the requested changes, please leave a comment on this pull request containing the phrase `I have made the requested changes; please review again`. I will then notify any core developers who have left a review that you're ready for them to take another look at this pull request.

@ericsnowcurrently (Member) commented:

FYI, there is a decent chance that we will make `threading.Lock` shareable (via channels). So keep that in mind.


Reviewers

@ericsnowcurrently (requested changes)
@LewisGaul (left review comments)

5 participants

@benedwards14, @bedevere-bot, @ericsnowcurrently, @LewisGaul, @the-knights-who-say-ni
