

[Serializer] Change visibility of getNormalizer and getDenormalizer#24206


Conversation

@welcoMattic
Member

welcoMattic commented Sep 14, 2017 (edited by nicolas-grekas)

| Q | A |
| --- | --- |
| Branch? | 3.4 |
| Bug fix? | no |
| New feature? | yes |
| BC breaks? | no |
| Deprecations? | no |
| Tests pass? | yes |
| Fixed tickets | |
| License | MIT |
| Doc PR | |

I'm currently working on a project that has many normalizers, enough to spend almost 20 seconds in the `getDenormalizer` method:

[Screenshot: profiler capture, 2017-09-14 15:04]
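For context on where that time goes, here is a simplified, self-contained sketch of the chain lookup that `Serializer::getDenormalizer` performs. This is not the actual Symfony source; the interface and class names here are made up for illustration. Each call loops over every registered normalizer and asks it whether it supports the target class.

```php
<?php

// Illustrative stand-in for Symfony's DenormalizerInterface (names assumed).
interface SupportsDenormalization
{
    public function supportsDenormalization($data, string $class): bool;
}

final class ChainLookup
{
    /** @var int counts supports* invocations, to make the O(n) cost visible */
    public $supportsCalls = 0;

    /** @var SupportsDenormalization[] */
    private $normalizers;

    public function __construct(array $normalizers)
    {
        $this->normalizers = $normalizers;
    }

    public function getDenormalizer($data, string $class): ?SupportsDenormalization
    {
        // The linear scan that dominates the profile: every lookup asks
        // every normalizer until one answers "yes".
        foreach ($this->normalizers as $normalizer) {
            ++$this->supportsCalls;
            if ($normalizer->supportsDenormalization($data, $class)) {
                return $normalizer;
            }
        }

        return null;
    }
}

// A dummy normalizer that only supports a single class.
final class OneClassNormalizer implements SupportsDenormalization
{
    private $class;

    public function __construct(string $class)
    {
        $this->class = $class;
    }

    public function supportsDenormalization($data, string $class): bool
    {
        return $class === $this->class;
    }
}
```

With 20 normalizers where only the last one matches, a single lookup performs 20 `supports*` calls; welcoMattic's indexed workaround replaces this scan with one array access.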

I would like to write my own Serializer, but I found out that the `getNormalizer` and `getDenormalizer` methods are private.

Here is what I would like to do:

```php
<?php

class ChainSerializer extends Serializer
{
    public function __construct(array $normalizers = [], array $encoders = [])
    {
        $indexedNormalizers = [];
        foreach ($normalizers as $normalizer) {
            /* Little workaround to get clean indices */
            $normalizerClassName = get_class($normalizer);
            $modelClassName = str_replace('Normalizer', '', $normalizerClassName);
            $modelClassName = str_replace('\\\\', '\\Model\\', $modelClassName);
            $indexedNormalizers[$modelClassName] = $normalizer;
        }

        parent::__construct($indexedNormalizers, $encoders);
    }

    protected function getNormalizer($data, $format, array $context)
    {
        return $this->normalizers[get_class($data)];
    }

    protected function getDenormalizer($data, $class, $format, array $context)
    {
        return $this->normalizers[$class];
    }
}
```

I also found out that these `$normalizers` are `protected`, which led me to submit this PR.

Ping @dunglas @lyrixx

@lyrixx
Member

I find this use case legitimate, and right now I don't see an easy way to achieve this performance optimization (which is quite significant in this project).

@nicolas-grekas added this to the 3.4 milestone on Sep 15, 2017
Contributor

@ogizanagi left a comment


It looks sensible to me.

@dunglas
Member

dunglas commented Sep 15, 2017 (edited)

There used to be a cache for this method; it was removed in #17140. But removing this cache introduced a huge performance penalty (the one you are suffering from).
Maybe it would be a good idea to reintroduce this cache, but disabled by default?

@nicolas-grekas
Member

The reason why we removed the cache is still valid, so this option would be a footgun, I guess.
Instead, what about adding an interface that provides a method returning a cache key? Then, if it is implemented, we use the cache.
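A minimal sketch of this cache-key idea. The interface name and method signature here are my assumptions based on the comment, not actual Symfony API: a normalizer that can describe its `supports*` decision with a stable key opts in, and the chain memoizes the result per normalizer and key.

```php
<?php

// Hypothetical opt-in interface (name and signature assumed).
interface SupportsCacheKeyInterface
{
    /** A stable key for the supports* decision, or null when not cacheable. */
    public function getSupportsCacheKey(string $class, ?string $format): ?string;
}

final class CachingChain
{
    /** @var int counts real supports* invocations, to show the memoization */
    public $supportsCalls = 0;

    private $normalizers;
    private $cache = [];

    public function __construct(array $normalizers)
    {
        $this->normalizers = $normalizers;
    }

    public function getDenormalizer($data, string $class, ?string $format = null)
    {
        foreach ($this->normalizers as $i => $normalizer) {
            $key = $normalizer instanceof SupportsCacheKeyInterface
                ? $normalizer->getSupportsCacheKey($class, $format)
                : null;

            if (null !== $key && isset($this->cache[$i][$key])) {
                // Cache hit: skip the supports* call for this normalizer.
                $supports = $this->cache[$i][$key];
            } else {
                ++$this->supportsCalls;
                $supports = $normalizer->supportsDenormalization($data, $class);
                if (null !== $key) {
                    $this->cache[$i][$key] = $supports;
                }
            }

            if ($supports) {
                return $normalizer;
            }
        }

        return null;
    }
}

// A dummy normalizer opting in to caching.
final class KeyedNormalizer implements SupportsCacheKeyInterface
{
    public function getSupportsCacheKey(string $class, ?string $format): ?string
    {
        return $class.'|'.($format ?? '');
    }

    public function supportsDenormalization($data, string $class): bool
    {
        return 'Target' === $class;
    }
}
```

Note that the loop over normalizers itself remains; only repeated `supports*` calls are avoided, which is the objection dunglas raises next.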

@dunglas
Member

@welcoMattic, can you share your profile publicly?

@dunglas
Member

dunglas commented Sep 15, 2017 (edited)

@nicolas-grekas I think it won't help: the previous cache allowed skipping the whole loop. Here we still need to loop and to call `supports*`. `supports*` is usually lightweight, so calling a method to get a cache key instead will have no benefit.

Another proposal: introduce a `NonCacheableSupportsReturnInterface` and use the cache only if no normalizer implements it.
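A sketch of this opt-out proposal, under the assumption that `NonCacheableSupportsReturnInterface` would be a plain marker interface (the name comes from the comment; the helper function is illustrative):

```php
<?php

// Hypothetical marker interface: a normalizer implements it to declare that
// its supports* results must not be cached.
interface NonCacheableSupportsReturnInterface
{
}

// The chain would enable its memoized lookup only when no registered
// normalizer opts out.
function supportsResultsAreCacheable(array $normalizers): bool
{
    foreach ($normalizers as $normalizer) {
        if ($normalizer instanceof NonCacheableSupportsReturnInterface) {
            return false;
        }
    }

    return true;
}
```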

@nicolas-grekas
Member

nicolas-grekas commented Sep 15, 2017 (edited)

We talked with @dunglas and might have found an idea:
we could add a new interface with a `getProvidedTypes()` method.
This method would return an array keyed by PHP type/class, valued by a bitfield of "vary" consts: DATA | CONTEXT.
This would tell the outside what the `supports*` methods vary on, so that good-enough cache/optimization strategies could be implemented with that info.
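The idea might look like the following. The interface name, the `VARY_*` constants, and the helper are assumptions based on the description above, not actual Symfony API:

```php
<?php

// Hypothetical interface for the "vary" idea (all names assumed).
interface ProvidedTypesInterface
{
    const VARY_NONE = 0;
    const VARY_DATA = 1;
    const VARY_CONTEXT = 2;

    /**
     * @return int[] map of PHP type / class name to a bitfield of VARY_*
     *               constants, telling callers what supports* varies on
     */
    public function getProvidedTypes(): array;
}

/**
 * A supports* result for ($normalizer, $type) can be memoized when the
 * normalizer either never declared the type (the answer is a stable "no")
 * or declared it without the VARY_DATA bit.
 */
function isSupportsCacheableFor(ProvidedTypesInterface $normalizer, string $type): bool
{
    $types = $normalizer->getProvidedTypes();

    if (!isset($types[$type])) {
        // Undeclared type: supports* is known to be false, trivially cacheable.
        return true;
    }

    return 0 === ($types[$type] & ProvidedTypesInterface::VARY_DATA);
}
```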

@ro0NL
Contributor

Is the vary strategy worth it? It's nice, but feels a bit over-engineered =/ After all, the `Serializer` is a simple chain implementation, which is fine by me.

Exposing these methods here is the easiest, no? Or introduce a new abstraction layer?

While at it: `Serializer::denormalizeObject()` can be inlined, I guess.

@dunglas
Member

@ro0NL exposing only partially fixes the problem. From my experience, this performance problem often occurs when you use the serializer intensively (yes, I have API Platform in mind).

Unlike exposing, fixing this problem would benefit every user without requiring custom development.

@ro0NL
Contributor

What about a new `AbstractKnownObjectNormalizer` 😱 or so? It would have a `final` `supports` method and expose its supported object types.

So it roughly follows the same path proposed by @nicolas-grekas, but is perhaps easier to implement.

@nicolas-grekas
Member

nicolas-grekas commented Sep 15, 2017 (edited)

I'd be interested in knowing whether a simple approach like #24228 could be enough.
Can any of you measure any significant improvement with it?

@ro0NL
Contributor

Also, `extends Serializer` might not be the right trade-off. If `$normalizerClassName = get_class($normalizer);` is your convention, perhaps implement the required interfaces directly? Most of it would be simple method forwarding anyway.

Yet I understand you prefer this extension point over copy/pasting `Serializer::__construct` or so. A new abstraction layer is effectively the same as this PR, so probably not worth it.

@welcoMattic
MemberAuthor

@dunglas hmm, sorry, it's a private application. But I can make a simple new Symfony app with as many normalizers.

Thanks all for discussing this performance issue. I'll try @nicolas-grekas's PR soon.


@lyrixx
Member

lyrixx commented Sep 18, 2017 (edited)

Hi

> I'd be interested in knowing if a simple approach like #24228 could be enough?

In our case it's not enough. That's logical, because our normalizers don't have this cache layer. The only way to make it faster in our case is to allow us to override the `getDenormalizer` / `getNormalizer` methods.


Member

nicolas-grekas left a comment


Just blocking the merge until #24206 (comment) has been addressed as OK or KO as a more generic fix for the perf issue.

nicolas-grekas added a commit that referenced this pull request on Sep 20, 2017
…kas)

This PR was merged into the 3.4 branch.

Discussion
----------

[Serializer] Add local cache to normalizers

| Q | A |
| --- | --- |
| Branch? | 3.4 |
| Bug fix? | no |
| New feature? | no |
| BC breaks? | no |
| Deprecations? | no |
| Tests pass? | yes |
| Fixed tickets | #24206 |
| License | MIT |
| Doc PR | - |

Should help making the Serializer a bit faster.

Commits
-------

b0c5cf0 [Serializer] Add local cache to normalizers
@nicolas-grekas
Member

@welcoMattic @lyrixx @dunglas any OK/KO about #24206 (comment)?

Status: waiting feedback

@lyrixx
Member

> we talked with @dunglas and might have found an idea:
> we could add a new interface with a `getProvidedTypes`
> this method would return an array keyed by PHP-type/class, valued by a bitfield of "vary" consts: DATA | CONTEXT.
> this would tell the outside what the "supports*" methods vary on, so that good enough cache/optimization strategies could be implemented with that info.

I'm not sure I follow you. How would you generate a cache id when the normalizer varies on the data? If it's an object, it's quite simple (`spl_object_hash`), but if it's an array...

@nicolas-grekas
Member

> how would you generate a cache id when the normalizer varies on the data

We would not. That'd then be the responsibility of the normalizer class. But in your example, you don't vary by "data" but only by "type(data)", which would be covered by the interface, so caching would happen.

@lyrixx
Member

lyrixx commented Sep 26, 2017 (edited)

OK, so your cache layer will only cache something based on the context?

@nicolas-grekas
Member

Almost like HTTP Vary, yes, with 3 possible vary-bys: type(data) and/or context. State inside the data cannot be considered from this layer.

@nicolas-grekas
Member

Oh, but there would still be a benefit when the data is considered: if you say "for this type, it varies by data", then the cache layer will still be able to skip the normalizer for any other, non-specified type.
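A sketch of that point, with assumed names: given the declared type maps, the chain can pre-filter candidates and never call `supports*` on a normalizer for a type it did not declare, even if its declared types vary by data.

```php
<?php

/**
 * Pre-filter step (illustrative, not Symfony API): given a map of
 * normalizer name => declared types (type => vary bitfield), return only
 * the normalizers that declared the requested type. All others can be
 * skipped without any supports* call, regardless of what they vary by.
 *
 * @param array<string, array<string, int>> $providedTypesByNormalizer
 * @return string[]
 */
function candidateNormalizers(string $type, array $providedTypesByNormalizer): array
{
    $candidates = [];
    foreach ($providedTypesByNormalizer as $name => $types) {
        if (array_key_exists($type, $types)) {
            $candidates[] = $name;
        }
    }

    return $candidates;
}
```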

@lyrixx
Member

It looks good to me. We will try to update this PR as soon as possible.

@welcoMattic
MemberAuthor

welcoMattic commented Sep 26, 2017 (edited)

Thanks for all your ideas. I'll see with @lyrixx about updating the PR before the feature freeze.


@nicolas-grekas
Member

I think it's too late for 3.4; adding this vary-by cache should be done for 4.1 now.
I'm closing this PR, as we figured out a way to not open these methods, which is always the solution we prefer for maintenance purposes, and replacing it with an issue describing what we discussed.

@nicolas-grekas
Member

See #24436


Reviewers

@nicolas-grekas requested changes

@dunglas: awaiting requested review (code owner)

@ogizanagi approved these changes


Milestone

3.4


7 participants

@welcoMattic @lyrixx @dunglas @nicolas-grekas @ro0NL @ogizanagi @carsonbot
