IntelPython/sdc (Public archive)
This repository was archived by the owner on Feb 2, 2024. It is now read-only.

WIP: interface for map-reduce style kernels #284

Open
Hardcode84 wants to merge 11 commits into IntelPython:master from Hardcode84:dist_refac

Conversation

Hardcode84
Contributor

This PR adds new APIs to be used by implementers of pandas functions to help parallelize their kernels:

  • map_reduce(arg, init_val, map_func, reduce_func)
  • map_reduce_chunked(arg, init_val, map_func, reduce_func)

Parameters:

  • arg - a list-like object (it can be a python list, a numpy array or any other object with a similar interface)
  • init_val - initial value
  • map_func - map function which will be applied to each element/range of elements in parallel (on different processes or on different nodes)
  • reduce_func - reduction function to combine the initial value and the results from different processes/nodes

The difference between these two functions (a short sequential sketch of the semantics follows after this list):

  • map_reduce will apply the map function to each element in the range (the map function must take a single element and return a single element) and then apply the reduce function pairwise (the reduce function must take two elements and return a single element)
  • map_reduce_chunked will apply the map function to the range of elements belonging to the current thread/node (the map function must take a range of elements as a parameter and return a list/array as the result) and then apply the reduce function to entire ranges (the reduce function must take two ranges as parameters and return a list/array)

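To make the calling convention concrete, here is a minimal sequential reference sketch of the semantics described above. This is not the SDC implementation and does not use its real import path; the names map_reduce_ref and map_reduce_chunked_ref and the fixed chunk count are placeholders, and in the real API the map step runs in parallel per thread/node.

from functools import reduce

# Sequential reference sketch only; the actual SDC functions run the map step
# in parallel across threads/nodes.

def map_reduce_ref(arg, init_val, map_func, reduce_func):
    # map_func: element -> element; reduce_func: (element, element) -> element
    return reduce(reduce_func, map(map_func, arg), init_val)

def map_reduce_chunked_ref(arg, init_val, map_func, reduce_func, n_chunks=4):
    # map_func: range of elements -> list/array;
    # reduce_func: (range, range) -> list/array
    size = len(arg)
    bounds = [size * i // n_chunks for i in range(n_chunks + 1)]
    chunks = (arg[bounds[i]:bounds[i + 1]] for i in range(n_chunks))
    return reduce(reduce_func, (map_func(c) for c in chunks), init_val)

# Example: sum of squares with the per-element variant.
assert map_reduce_ref([1, 2, 3, 4], 0, lambda x: x * x, lambda a, b: a + b) == 30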
You can also call any of these functions from inside a map or reduce function to support nested parallelism.

These functions are usable for both thread and MPI parallelism.

If you call them from a numba @njit function, they will be parallelized by numba's built-in parallelisation machinery.

If you call them from @hpat.jit, they will be distributed by the hpat parallelisation pass (doesn't work currently).

I wrote parallel series sorting (numpy.sort + a hand-written merge) as an example.
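For reference, this is a sequential sketch of that sort idea, not the code from this PR: sort each chunk with numpy.sort, then merge the sorted chunks pairwise with a hand-written merge, following the map_reduce_chunked semantics above. The chunk count is fixed at 4 here to mirror the hardcoded value mentioned in the issues list below, and NaN handling is not addressed.

import numpy as np
from functools import reduce

def merge_sorted(a, b):
    # Hand-written merge of two already sorted 1-D arrays.
    out = np.empty(len(a) + len(b), dtype=np.result_type(a, b))
    i = j = k = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out[k] = a[i]
            i += 1
        else:
            out[k] = b[j]
            j += 1
        k += 1
    out[k:] = a[i:] if i < len(a) else b[j:]
    return out

def chunked_sort(data, n_chunks=4):
    size = len(data)
    bounds = [size * c // n_chunks for c in range(n_chunks + 1)]
    # "map" step: sort each chunk; "reduce" step: merge the sorted chunks pairwise.
    sorted_chunks = (np.sort(data[bounds[c]:bounds[c + 1]]) for c in range(n_chunks))
    return reduce(merge_sorted, sorted_chunks, np.empty(0, dtype=data.dtype))

data = np.random.rand(1000)
assert np.array_equal(chunked_sort(data), np.sort(data))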

Current issues:

  • Thread-parallel sort isn't working due to the numba issue "Invalid result with parfor" (numba/numba#4806)
  • MPI parallelisation doesn't work yet (lots of issues; the biggest one is that hpat supports only a very limited set of built-in functions (sum, mult, min, max) for parfor reductions)
  • Parallel sort handles NaNs differently from numpy.sort, needs to be fixed
  • The thread/node count in map_reduce_chunked is hardcoded as 4, will fix
  • Proper documentation is still needed

The second part of this PR is a distribution depth knob to (not so) fine-tune nested parallelism between distribution and threading:

  • A new environment variable SDC_DISTRIBUTION_DEPTH controls how many nested parallel loops will be distributed by DistributedPass
  • Distributed loops are any of the newly introduced map_reduce* functions or manually written prange loops.
  • The default value is 1, which means that only the outermost loop will be distributed by MPI, the next loop will be parallelised by numba, and all deeper loops will be executed sequentially (as numba doesn't support nested parallelisation)
  • Set SDC_DISTRIBUTION_DEPTH to 0 to disable distribution.
# SDC_DISTRIBUTION_DEPTH=1
for i in prange(I)          # distributed by DistributedPass
    for j in prange(J)      # parallelised by numba
        for k in prange(K)  # executed sequentially
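For comparison, this is my reading of the same nesting with distribution disabled (an inference from the description above, not an example taken from the PR):

# SDC_DISTRIBUTION_DEPTH=0 (distribution disabled)
for i in prange(I)          # parallelised by numba
    for j in prange(J)      # executed sequentially
        for k in prange(K)  # executed sequentially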

@pep8speaks

pep8speaks commented Nov 12, 2019 (edited)

Hello @Hardcode84! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2019-11-12 11:16:09 UTC

a = len(l) // n
b = a + 1
c = len(l) % n
return [l[i * b: i * b + b] if i < c else l[c * b + (i - c) * a: c * b + (i - c) * a + a] for i in range(n)]

it is quite understandable code, isn't it? :-)
please don't call variables by single letter

Reviewers

@shssf left review comments

Awaiting requested review from @AlexanderKalistratov

At least 1 approving review is required to merge this pull request.

3 participants
@Hardcode84 @pep8speaks @shssf
