Add support for tensorflow together with ODL #972

Merged
adler-j merged 70 commits into odlgroup:master from adler-j:tensorflow_support
Aug 29, 2017

Conversation

adler-j (Member)

TODO: summary

@kohr-h (Member)

Phew.. My review here against your review on #861? :-P

@adler-j (Member, Author) commented Apr 16, 2017 (edited)

It's on its way :) But you wait until I rebase from master, this is actually quite a small PR.

@kohr-h (Member)

Is this ready for review or should I wait?

@adler-j (Member, Author)

Still haven't merged from master, but you can easily get started if you want to.

adler-j mentioned this pull request May 23, 2017
@kohr-h (Member) left a comment

I guess since this code is battle-tested there's not very much to say about the implementation itself. I just wonder if temporaries would make sense to speed up things, but that might quickly become infeasible. Dunno.

In general I find the wrapping code hard to digest, not because of complexity (it's really not that elaborate) but more because of all the inline function definitions that take up quite some space. If there's a way to make this a bit less nested, I'd appreciate it.

Anyway, docs could use some cleanup. Other than that it looks good (nice detective work on the ninja-style workarounds).



What is vectorization?
======================
kohr-h (Member):

Has this stuff moved here from some other file?

In any case, can you convert it to the "one sentence - one line" RST style?

============

Python functions are in most cases used as input to a discretization process. For example, we may
want to discretize a two-dimensional Gaussian function:
kohr-h (Member):

Gaussian function ::
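
A minimal sketch of the vectorized style that doc passage describes; the space bounds and grid size below are made up for illustration and are not taken from the PR:

```python
import numpy as np
import odl

# Discretize [-1, 1] x [-1, 1] on a 100 x 100 grid.
space = odl.uniform_discr([-1, -1], [1, 1], [100, 100])

# Vectorized Gaussian: `x` is a tuple of coordinate arrays, so NumPy
# broadcasting evaluates the function on the whole grid at once.
gaussian = space.element(lambda x: np.exp(-(x[0] ** 2 + x[1] ** 2)))
```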

@@ -0,0 +1,38 @@
"""Example of how to convert an ODL operator to a tensorflow layer."""

import tensorflow as tf
kohr-h (Member):

Are we sure that odl.contrib.tensorflow is never confused with tensorflow (e.g. odl.contrib.tensorflow.tensorflow)?

If tensorflow wasn't already so long I'd suggest using a bindings suffix.. Maybe a shorter package name like tf_bindings will cut it?

adler-j (Member, Author):

I'm quite sure that is a non-problem. With that said, I'll change this whole thing to a submodule and split it; that allows better long-term extensions.


# Create tensorflow layer from odl operator
odl_op_layer = odl.contrib.tensorflow.as_tensorflow_layer(
odl_op, 'MatrixOperator')
kohr-h (Member):
indentation
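
A sketch of how the indented call might read in context, assuming the `as_tensorflow_layer(op, name)` signature shown in the diff; the matrix and operator setup is illustrative only:

```python
import numpy as np
import odl
import odl.contrib.tensorflow

# Illustrative ODL operator: multiplication by a small fixed matrix.
matrix = np.array([[1.0, 2.0],
                   [0.0, 1.0],
                   [0.0, 1.0]])
odl_op = odl.MatrixOperator(matrix)

# Create tensorflow layer from odl operator; the continuation line is
# indented as the review comment asks.
odl_op_layer = odl.contrib.tensorflow.as_tensorflow_layer(
    odl_op, 'MatrixOperator')
```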

print(odl_op(x))

# Evaluate the adjoint of the derivative, called gradient in tensorflow
print(tf.gradients(y_tf, [x_tf], z_tf)[0].eval().ravel())
kohr-h (Member):

Perhaps use keyword arguments to make the code more self-explanatory. Now it's a bit of guesswork as to what variable stands for what.
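
A sketch of the keyword-argument spelling this suggests, using the TF1 `tf.gradients` parameter names (`ys`, `xs`, `grad_ys`); the tensor names are the ones from the diff and the line is a drop-in for the call quoted above:

```python
# Adjoint of the derivative of the layer, with each tensor's role spelled out.
adjoint_value = tf.gradients(ys=y_tf, xs=[x_tf], grad_ys=z_tf)[0]
print(adjoint_value.eval().ravel())
```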

self.name = name

def _lincomb(self, a, x1, b, x2, out):
with tf.name_scope('{}_lincomb'.format(self.name)):
kohr-h (Member):

Will this clash if you use the same name twice?

adler-j (Member, Author):

No, it becomes _lincomb_2 etc.
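
A minimal sketch of the TF1 graph-mode behaviour this answer refers to: reusing a scope name makes TensorFlow append a numeric suffix instead of clashing (scope and constant names below are made up):

```python
import tensorflow as tf

with tf.name_scope('space_lincomb'):
    a = tf.constant(1.0, name='a')   # node name: 'space_lincomb/a'
with tf.name_scope('space_lincomb'):
    b = tf.constant(2.0, name='b')   # node name: 'space_lincomb_1/b'

print(a.name, b.name)
```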

return isinstance(other, TensorflowSpace) and other.shape == self.shape

def __repr__(self):
return 'TensorflowSpace({})'.format(self.shape)
kohr-h (Member):

Consider subclassers
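
One way to address the subclassing remark (a sketch, not necessarily the fix adopted in the PR) is to build the repr from the runtime class name:

```python
def __repr__(self):
    # Subclasses automatically get their own name in the repr.
    return '{}({})'.format(self.__class__.__name__, self.shape)
```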

self._data = value

def __repr__(self):
return '{}.element({})'.format(self.space, self.data)
kohr-h (Member):

{!r}
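
The `{!r}` suggestion refers to formatting with `repr()` instead of `str()`; a sketch of the amended method:

```python
def __repr__(self):
    # '{!r}' calls repr() on the arguments, giving round-trippable output.
    return '{!r}.element({!r})'.format(self.space, self.data)
```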


"""Wrap ODL operator so that it acts on TensorflowSpace elements."""

def __init__(self, domain, range, func, adjoint=None, linear=False):
kohr-h (Member):

Better use the same name for parameter and stored attribute.
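
A sketch of the naming convention asked for here; the attribute assignments are illustrative and not the PR's actual constructor body:

```python
def __init__(self, domain, range, func, adjoint=None, linear=False):
    # Parameter and stored attribute share one spelling, per the review:
    # `func` is kept as `self.func`, `adjoint` as `self.adjoint`.
    self.func = func
    self.adjoint = adjoint
```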

return tensorflow_layer


class TensorflowSpace(LinearSpace):
kohr-h (Member):

A bit more docs for the fellas here and below wouldn't hurt.
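
A sketch of the kind of class-level docs this asks for; the wording is invented here, not taken from the PR:

```python
class TensorflowSpace(LinearSpace):
    """A linear space whose elements wrap TensorFlow tensors.

    The space is characterized by a fixed tensor shape, and arithmetic
    such as ``lincomb`` is expressed as TensorFlow graph operations so
    that ODL operators can act on tensors inside a TensorFlow graph.
    """
```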

@kohr-h (Member)

And, to repeat one of the review comments, a top-level README is quite important for this stuff.

@adler-j (Member, Author)

Thanks for the review, I'll try to get this done and merged rather soon, but I'm waiting for the FOM pull request (and more free time for me) since I'll move some stuff into there.

kohr-h changed the title from "Add support for tensorflow togeather with ODL" to "Add support for tensorflow together with ODL" on Jun 21, 2017
adler-j mentioned this pull request Jul 3, 2017 (20 tasks)
@adler-j (Member, Author)

Fixed the comments, added some tests etc. IMO this can go in now.

@kohr-h (Member) left a comment

Some final comments, merge when done.

Clearly a very cool addition!


## Example usage

The [examples](examples) folder contains example on how to use the above functionality.
kohr-h (Member):
examples

* [tensorflow_layer_matrix.py](examples/tensorflow_layer_matrix.py) shows how an ODL `MatrixOperator` can be converted to a tensorflow layer.
* [tensorflow_layer_productspace.py](examples/tensorflow_layer_productspace.py) shows how an ODL operator acting on `ProductSpace`s can be converted to a tensorflow layer.
* [tensorflow_layer_ray_transform.py](examples/tensorflow_layer_ray_transform.py) shows how a `RayTransform` can be converted to a tensorflow layer.
* [tensorflow_operator_matrix.py](examples/tensorflow_operator_matrix.py) shows how `tf.matmul` can be used as a ODL operator.
kohr-h (Member):

an ODL


step = learning_rate * np.sqrt(1 - beta2) / (1 - beta1)

x.lincomb(1, x, -step, m / (np.sqrt(v) + eps))
kohr-h (Member):

Did you check this with the paper? Regarding your earlier answer to my review comment, yes, I read the paper, and I couldn't match the expressions there with the implementation here.

adler-j (Member, Author):

I'll review this once more then I guess!
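
For reference, the bias-corrected update from the Adam paper (Kingma & Ba, 2015) that the reviewer is trying to match the code against; this is the textbook rule written as a standalone sketch, not the PR's implementation:

```python
import numpy as np

def adam_update(x, m, v, t, learning_rate, beta1, beta2, eps):
    """Bias-corrected Adam step as written in Kingma & Ba (2015).

    m, v are the running first and second moment estimates after iteration t.
    """
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return x - learning_rate * m_hat / (np.sqrt(v_hat) + eps)
```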

@@ -62,6 +63,10 @@ def __init__(self, geometry, reco_space, proj_space):

self.create_ids()

# Create a mutually exclusive lock so that two callers can't use the
# same shared resource at the same time.
self.mutex = Lock()
kohr-h (Member):

bump (prefer this a bit more hidden, _mutex)

adler-j (Member, Author):

Agree, will fix that.
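
A sketch of the renamed attribute in use, assuming a `threading.Lock` as in the diff; the class and method names here are illustrative, not the PR's:

```python
from threading import Lock

class SharedBackend(object):
    """Illustrative wrapper around a resource that one caller may use at a time."""

    def __init__(self):
        # Leading underscore keeps the lock out of the public interface.
        self._mutex = Lock()

    def evaluate(self, x):
        with self._mutex:
            # Only one thread at a time reaches this point.
            return self._run_backend(x)

    def _run_backend(self, x):
        # Placeholder for the actual shared-resource call (e.g. a GPU kernel).
        return x
```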

@@ -802,6 +803,42 @@ def _apply_padding(lhs_arr, rhs_arr, offset, pad_mode, direction):
working_slc[axis] = intersec_slc[axis]


def zscore(arr):
kohr-h (Member):

Is that your naming or is it a standard notion?

adler-j (Member, Author) commented Aug 24, 2017 (edited)

kohr-h reacted with thumbs up emoji
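
The z-score (standard score) is a standard statistical notion; a minimal sketch of such a normalization, shown for reference and not necessarily identical to the helper added in the diff:

```python
import numpy as np

def zscore(arr):
    """Normalize ``arr`` to zero mean and unit standard deviation."""
    arr = np.asarray(arr)
    return (arr - arr.mean()) / arr.std()
```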
@adler-j (Member, Author)

Merge after CI

adler-j merged commit ea016dc into odlgroup:master Aug 29, 2017
adler-j deleted the tensorflow_support branch August 29, 2017 17:26
Reviewers: kohr-h (approved these changes)
Assignees: No one assigned
Projects: None yet
Milestone: No milestone
Participants: @adler-j, @kohr-h
