This repository was archived by the owner on Aug 29, 2025. It is now read-only.

Reduced portraits #78

Open
johann1764 wants to merge 3 commits into naev:main from johann1764:red

Conversation

@johann1764

@bobbens
Member

As this is automatic, it probably makes more sense to be part of the build system when doing releases. That way we wouldn't have to manage them in the repo and have more trouble here.

Alternatively, like how the glTF code can resize things online, doing it online may make sense, without even compressing them offline. However, we would either have to use a library or implement the scaling algorithms ourselves if we want something more complicated than bilinear or the like.
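For reference, bilinear interpolation (the baseline being compared against here) can be sketched in a few lines. This is an illustrative pure-Python version, not code from the Naev tree; the `bilerp` helper and `grid` example are hypothetical:

```python
def bilerp(img, x, y):
    """Sample a 2D grid (list of rows of floats) at fractional
    coordinates (x, y) using bilinear interpolation."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    # Blend horizontally on the two bracketing rows, then vertically.
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

# Halfway between a 0.0 and a 1.0 texel comes out as 0.5.
grid = [[0.0, 1.0],
        [0.0, 1.0]]
print(bilerp(grid, 0.5, 0.0))  # 0.5
```

Bilinear only ever looks at the four nearest texels, which is why strong downscales blur or alias; the fancier kernels discussed below use wider support.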

@johann1764
Author

johann1764 commented May 21, 2025 (edited)

My point is that ImageMagick does this correctly, contrary to GIMP for example.
To me, the simplest and safest path is to use a reliable program that does that specialized task for us.

> As this is automatic, it probably makes more sense to be part of the build system when doing releases. That way we wouldn't have to manage them in the repo and have more trouble here.

Running an update script each time there are new portraits is not that much trouble. There is not that much traffic there. Worst-case situation: someone forgot to update, and therefore some reduced-size graphics are missing. In that case, it works as it does now. So the worst case is the current situation, and the normal case is better than that.

I see the good sides of having it as part of the build system; for example, that would allow smaller image sizes for non-doubled graphics versions.

My opinion is that this is overkill for now. If one day we have more specialized management of low-quality/high-quality graphics, it will make sense to have different builds for that.

But for now, having just an option to use this alternative dir or not is very simple and solves the problem, without adding any complexity to the build system.

By the way, notice that despite its small output size, this process is time-consuming.

@bobbens
Member

OK, I'm working on a Lanczos implementation for Naev (GPU-based) which should be the same as the one used by ImageMagick. That should let us do it online and apply it more generally to downscaling all images in the toolkit, without having to generate new images or add fallback images case by case while still not handling all potential scaling factors.

[image: test1]

It still needs a lot of improvement, but the above image shows (from left to right): linear interpolation (current), nearest, the Lanczos GPU implementation (new, WIP), and finally ImageMagick convert (with -scale). There's a bit of an issue with the alpha I still have to work on, and a wee bit of ringing (need better parameters), but it is better with the lines on the clothes.
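For context, the Lanczos kernel discussed here is a windowed sinc. A minimal 1-D sketch, assuming the common a = 3 support (the function name is hypothetical, not from the Naev or ImageMagick code):

```python
import math

def lanczos(x, a=3):
    """1-D Lanczos kernel: sinc(x) * sinc(x/a) on (-a, a), 0 outside.
    Written in expanded form to avoid a separate sinc helper."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# The kernel is 1 at the centre tap and ~0 at every other integer tap,
# so an unscaled image passes through unchanged.
print(lanczos(0.0))             # 1.0
print(round(lanczos(1.0), 12))  # 0.0 (sin(pi) is ~0 up to float error)
```

The negative lobes of the windowed sinc are what produce the sharpness, and also the ringing mentioned above; the `a` parameter trades ringing against blur.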

@johann1764
Author

johann1764 commented May 22, 2025 (edited)

That is a huge piece of work that does not really need to be done now. It must be interesting, though.

To me, it looks like a big challenge to do as well as image-processing specialists do (they wrote a good article about this; if you're interested, I can try to find it again). Moreover, doing this requires "un-gamma"-ing the picture first, then "re-gamma"-ing it afterwards. Getting this information from the picture depends on the format, and ensuring that this is done properly in all cases is maintenance work that is probably better left to specialized tools. I am confident that convert maintainers will continue to offer this feature and adapt it to any picture format.
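The "un-gamma"/"re-gamma" round trip can be illustrated with the standard sRGB transfer functions. This toy sketch (hypothetical helper names, not Naev or ImageMagick code) shows why averaging encoded sRGB values directly comes out too dark:

```python
def srgb_to_linear(c):
    """Decode one sRGB channel (0..1) to linear light ("un-gamma")."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode linear light back to sRGB ("re-gamma")."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Averaging a black and a white pixel:
naive = (0.0 + 1.0) / 2  # 0.5 in sRGB space: renders too dark
correct = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
print(round(correct, 3))  # ~0.735: the perceptually right mid-grey
```

Downscaling in the wrong space systematically darkens high-contrast detail, which is the failure johann1764 attributes to some tools above.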

Even if I see the potential benefits of this work (like having the 200->140 scaling done properly, which I am particularly interested in), I think it is not time-effective, meaning it will take a lot of your time for a not-so-huge benefit. It is a matter of ratio. However, I can understand if you just find it interesting and want to experiment.

Since you already have good results, the current best result/time ratio would be to:

  • Stay with the pre-computed pictures approach. Currently, I just symlinked dat/gfx/portraits to artwork/gfx/portraits_red, and it does the trick. People not needing reduced versions can just remove the symlink, and that's it.
  • Always use your interpolation technique for the 200->140 resizing, as it is already better than any default hardware technique (as far as I know).

All that can be done in no time and will significantly improve rendering for everyone.

@bobbens
Member

> That is a huge piece of work that does not really need to be done now. It must be interesting, though.

Found a better approach; I will implement it in a bit. Actually, looking at the default "scale" implementation of ImageMagick, it is overblurring because it is averaging.

> To me, it looks like a big challenge to do as well as image-processing specialists do (they wrote a good article about this; if you're interested, I can try to find it again). Moreover, doing this requires "un-gamma"-ing the picture first, then "re-gamma"-ing it afterwards. Getting this information from the picture depends on the format, and ensuring that this is done properly in all cases is maintenance work that is probably better left to specialized tools. I am confident that convert maintainers will continue to offer this feature and adapt it to any picture format.

Naev already works in linear colourspace so nothing is needed there :)

> Even if I see the potential benefits of this work (like having the 200->140 scaling done properly, which I am particularly interested in), I think it is not time-effective, meaning it will take a lot of your time for a not-so-huge benefit. It is a matter of ratio. However, I can understand if you just find it interesting and want to experiment.

Don't worry, the implementation should probably be done today/tomorrow. It has the potential to actually be better than ImageMagick; I just need to run some tests. Also, since it would be cached, it should add negligible computation time, as it is done in shaders with a separable approximation.
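To illustrate what a separable approximation buys: a separable 2-D kernel can be applied as two cheap 1-D passes (rows, then columns) instead of one full 2-D convolution, dropping the per-pixel cost from O(n²) taps to O(2n). A toy pure-Python sketch with zero-padded edges (all names hypothetical, not the actual shader code):

```python
def convolve_rows(img, k):
    """Apply a 1-D kernel k horizontally (zero padding at the edges)."""
    r = len(k) // 2
    out = []
    for row in img:
        new = []
        for x in range(len(row)):
            s = 0.0
            for i, w in enumerate(k):
                j = x + i - r
                if 0 <= j < len(row):
                    s += w * row[j]
            new.append(s)
        out.append(new)
    return out

def convolve_cols(img, k):
    """Apply the same kernel vertically by transposing twice."""
    t = [list(c) for c in zip(*img)]
    return [list(c) for c in zip(*convolve_rows(t, k))]

# A separable 2-D blur = one horizontal pass + one vertical pass.
k = [0.25, 0.5, 0.25]
img = [[0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0]]
blurred = convolve_cols(convolve_rows(img, k), k)
print(blurred[1][1])  # 0.25 (= 0.5 * 0.5, the outer product of the taps)
```

The two-pass result is exact only when the 2-D kernel really is an outer product of two 1-D kernels; for a non-separable kernel it is only an approximation, which is relevant to the alpha issue discussed later in the thread.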

Once implemented, I plan to have the "image" widget automatically do it for all images. That means it has the potential to improve a lot more places (and once the Rust toolkit is implemented, it will likely be extended to a lot more), and also handle the "low memory mode" textures. Let me see how the final results go, but I think that if it works well, there should be no need for the downscaled images.

@johann1764
Author

I am looking forward to seeing it.

> Don't worry, the implementation should probably be done today/tomorrow.

Which is the time you need to develop a full campaign line. That sort of proves my point. :-)

@bobbens
Member

> I am looking forward to seeing it.
>
> > Don't worry, the implementation should probably be done today/tomorrow.
>
> Which is the time you need to develop a full campaign line. That sort of proves my point. :-)

I wish I could do a full campaign in 2 days, it's usually more like 6 months :P

@johann1764
Author

I follow the commits. When you're on it, it goes really fast: more than a scenario per day. It takes six months because you're doing tons of other stuff at the same time.

@bobbens
Member

OK, I've implemented this on a toolkit level. So any image in an imagearray or the image widget should render with fancy downscaling. Since the toolkit is cached in a framebuffer that only recomputes as necessary, it shouldn't add much overhead.

The implementation is based on the magic kernel (2021 version) and is implemented in a single-pass shader. The reason the transparency was flaky was that the kernel is non-separable and I was separating it, causing issues with diagonals. Now, with a single pass, it should be better.
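For reference, the base magic kernel is a quadratic B-spline with support [-1.5, 1.5]; the 2021 "Magic Kernel Sharp" variant layers sharpening steps on top of it, which are not reproduced in this hypothetical sketch:

```python
def magic_kernel(x):
    """Base 'magic kernel' (a quadratic B-spline, support [-1.5, 1.5]).
    The 2021 'Magic Kernel Sharp' variant used in the PR adds a
    sharpening post-filter; only the base kernel is sketched here."""
    x = abs(x)
    if x <= 0.5:
        return 0.75 - x * x
    if x <= 1.5:
        d = 1.5 - x
        return 0.5 * d * d
    return 0.0

# Weights at unit-spaced taps sum to 1 (partition of unity), so flat
# regions keep their brightness when resampled.
print(magic_kernel(0.0))                                           # 0.75
print(magic_kernel(-1.0) + magic_kernel(0.0) + magic_kernel(1.0))  # 1.0
```

Unlike Lanczos, the base kernel is non-negative everywhere, so it cannot ring by itself; the sharpening stage is what restores the detail lost to its smoothing.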

[image: best_resize]

As before, the results are linear, nearest, NEW, and ImageMagick scale. You can see it's much sharper, while the ImageMagick one is blurry (although in this particular case you could argue the effect it gives on the hologram is better). However, there are no artefacts at all, and this should improve all aspects of the landing stuff.

Tell me how it works for you.

