APCA KEY USE CASE DEFINITIONS

USE CASES CLARIFIED & RELATED CONDITIONALS (APCA)

It is of particular importance for complex information design to present a clear visual hierarchy, and contrast variation is one of the key methods for doing so. As such, defining contrast requirements per use case is the ideal means to achieve a reasonable guideline. In WCAG 2, SC 1.4.3 attempted this with the claim of a "three-way" contrast at 4.5:1 (#FF/#76, #76/#00); however, this fails for several reasons.
WCAG 2 "Three Way" example:
Use Case By Functional Need

What we introduce with APCA is the concept of guidelines per use case, based on functional need. For instance, the assumed functional need for columns of body text is a high quality of fluent readability, where the text can be read effortlessly, with minimal fatigue, and at maximum speed and comprehension. At the other end of the spectrum is text such as placeholders, disabled items, or copyright notices. It should be abundantly clear that such text has a much lower need for readability.

Making these distinctions among use cases is of critical importance, not merely for aesthetic design reasons, but for arranging the content within a visual hierarchy. This is particularly important for lexical, cognitive, and accessibility reasons of its own. A page of content with everything at the same size and contrast is difficult to read and access. Good readability requires not only adequate contrast for the content text, but a layout and visual hierarchy that leads the user through the content in an accessible manner. APCA guidelines are intended to present a complete visual readability best practice.

USE CASE CLASSES and CONSIDERATIONS

The "fluent" category is divided into body text and non-body text, with the highest contrast reserve ascribed to body text: a critical contrast of 18 times threshold at critical size, where threshold is a LogCS of ~1.3, or ~5% (roughly related to 20/70¹), or an APCA Lc 5.

(1) It is important to note that acuity and contrast sensitivity are independent, despite what is stated in WCAG 2 (which is notwithstanding here). Fluent text values assume a font sized at the critical size for the given contrast. Sub-fluent levels permit smaller than critical font sizes.
(2) Bailey/Lovie-Kitchin define minimum spot reading at 3 times threshold; here we set that low bar only for certain non-text elements. Semantic and differentiable non-text requires a contrast relative to spatial frequency similar to "other fluent text". Merely discernible non-text is more equivalent to sub-fluent text.
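To put the footnoted numbers in concrete terms, here is a small sketch (reading "N times threshold" as a multiple of the threshold percent contrast is an interpretive assumption here, not a statement from the post):

```typescript
// Convert a log contrast sensitivity (LogCS) threshold to percent
// contrast, then scale by a "times threshold" multiple.
// CS = 10^logCS, and threshold contrast = 1 / CS (assumption).

function thresholdContrast(logCS: number): number {
  return Math.pow(10, -logCS);
}

const threshold = thresholdContrast(1.3);   // ≈ 0.050, i.e. ~5%
const bodyText = 18 * threshold;            // ≈ 0.90, i.e. ~90% (critical contrast)
const spotReading = 3 * threshold;          // ≈ 0.15, i.e. ~15% (minimum spot reading)

console.log((threshold * 100).toFixed(1) + "%");   // "5.0%"
console.log((bodyText * 100).toFixed(0) + "%");    // "90%"
console.log((spotReading * 100).toFixed(0) + "%"); // "15%"
```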
[Infographic panels: FLUENT, ZOOM]
SPOT/ANCILLARY (non-fluent), SOFT (semi-fluent), and LARGE (fluent headlines)

Currently under consideration are the breakpoints for:
NON LEXICAL
EMPIRICAL BASIS FOR THE FOREGOING (IN PART):
Contrast Sensitivity vs Spatial Frequency

Per the following chart, we can see that spatial frequency drives contrast perception. Here, various font sizes and weights are used for a practical demonstration. This premise applies to icons, pictograms, and other graphical elements as well. Not shown is hue/chroma contrast, which has much lower spatial frequency sensitivity and peaks well below 1 cycle/degree, whereas luminance contrast peaks at 2 to 3 cycles/degree.

"Normal" Vision Defined

Normal Vision is a specific, somewhat clinical definition:
Footnotes:
Copyright © 2019-2021 by Andrew Somers and Myndex Research™. All Rights Reserved.

Myndex Infographics

Critical Font Size

NOTE: review is ongoing, in particular to align actual fonts with this table.

Physical Device Visual Angle per Distance

CITED RESEARCH (selects)

TERMS FROM RELATED RESEARCH OR STANDARDS

From Cheong/Lovie-Kitchin/Bailey/Bowers, also Legge, Arditi, Whittaker, and others. This includes some of the terms as commonly used in research.
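Neither the spatial-frequency chart nor the "Physical Device Visual Angle per Distance" table survives in this text-only copy, but the geometry behind both is compact. A minimal sketch (hypothetical helper names; 96 ppi and a 28-inch viewing distance are the CSS reference-px assumptions, not values from the table):

```typescript
// Visual angle of a feature of a given physical size at a given
// viewing distance, plus the spatial frequency of a repeating pattern.

function visualAngleArcMin(sizeInches: number, viewInches: number): number {
  const radians = 2 * Math.atan(sizeInches / (2 * viewInches));
  return radians * (180 / Math.PI) * 60; // degrees → arc minutes
}

function cyclesPerDegree(cyclePx: number, ppi: number, viewInches: number): number {
  const arcMinPerCycle = visualAngleArcMin(cyclePx / ppi, viewInches);
  return 60 / arcMinPerCycle; // cycles per degree of visual angle
}

// The CSS reference px: 1/96 in viewed at 28 in ≈ 1.28 arc minutes.
console.log(visualAngleArcMin(1 / 96, 28).toFixed(2)); // "1.28"

// A 16 px light/dark cycle (8 px stroke + 8 px gap) at 96 ppi, 28 in:
console.log(cyclesPerDegree(16, 96, 28).toFixed(1)); // ≈ 2.9 cy/deg, near the luminance CSF peak
```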
--------------- FOOTNOTES
TERMINOLOGY EMERGING FROM SAPC/APCA RESEARCH

This was moved to a new post below.
-
From the current APCA tool page:

Accessible Contrast Relative to Font Size and Weight

The table below is current as of January 2022, though as this is a public beta, there may be occasional discrepancies with the automated font display above.

GENERAL GUIDELINES on LEVELS

These general levels are appropriate for use without reference to the lookup table.
NOTES ON FONT SIZE

Font sizes listed above assume an x-height ratio of at least 0.5. Font weight is based on highly standardized reference fonts such as Helvetica or Arial. "px" means the CSS reference px, not device pixels; the reference px is defined as 1.278 arc minutes of visual angle.

NOTES ON WCAG_2 COMPATIBILITY

To use the APCA tool for minimum Bridge-PCA conformance (backwards compatible with WCAG_2):
Using the Lookup Table
Font Lookup Table

About the colors in these tables: The colors in these tables are designed to be discernible by all forms of color vision deficiency.
Additional Notes
Other guidance
-
Non-Text Categorization

This is something that's been on my mind lately, as I've been thinking about an irreducible base categorization for non-text design guidance:

Four broad categories

These four categories have a similar hierarchical need of contrast as does body text, followed by fluent text, followed by spot text, followed by ancillary text. Each of these categories is distinct enough to have different design guidance needs.
Uniqueness of
-
Having some guidance on focus state contrast would be good, to answer questions like:
-
I think it makes sense to distinguish these, at least for now. WCAG 2.2 should have a requirement for a focus indicator that APCA can build on. The recent version is here: https://raw.githack.com/w3c/wcag/focus-appearance-rework/understanding/22/focus-appearance-minimum.html
-
This post was moved to #104. If you followed a link here, please let the linker know the link changed.

TERMINOLOGY EMERGING FROM SAPC/APCA RESEARCH

Old post contents hidden here

This was moved from the first post above to make it easier to link to: In the course of research here, we have a few terms that are specific to the use case(s), and I'd like to clearly define them. These terms were created in the interest of clear and plain terminology that is descriptive and easily understood with little to no special explanation. I.e., the terms themselves are intended to be easy to grok, to help keep things simple, short, and digestible. (Some of the definitions need to be reworked, and moreover, visual aids created.)

COLOR and LIGHT
TEXT SIZE and CONTRAST
PAGE and LAYOUT
USER PERSONALIZATION
-
Hello, after quite some research as a complete beginner in everything regarding color, I stumbled upon this Myndex repository, with lots of new information for me. I have been trying to achieve my goal of automatically and consistently creating (at least more or less) perceptually linear, bit-by-bit lighter or darker shades of an arbitrary color, instead of having to find recognizably small changes in shade manually, by trial and error, in some regular color picker, but I haven't been successful so far.

I started this whole "journey", like probably any complete beginner, with no knowledge, in the RGB color space using the HSV model; then I learned about blending, then about gamma correction, then about luminance, then about contrast ratio. Still, no success was in sight, so I learned about bigger color spaces and the XYZ color space, and I learned that my goal should be achievable with the LAB color model. I now have some working code that, according to different color conversion calculators, can correctly represent and transform any RGB/HSV color into the corresponding LAB/LCH value, and can also correctly calculate relative luminance and contrast ratio according to web standards.

I thought that in this model I could just increase lightness in equidistant steps to get perceptually, gradually lighter shades of grey (to start with), but the result was unfortunately underwhelming. I tried increasing lightness in the LCH model in equal steps and in percentage increases, and I also tried decreasing the contrast ratio against the background, and against the previous darker shade, in percentage steps, but I still can't closely recreate my manual result. Therefore I would like to ask if you could give me some hint as to the direction in which I should extend my research to achieve my goal?
-
First, my sincere apologies for the long delay in responding to your message. This probably would've been better as its own separate thread; nevertheless, I'm sorry that it slipped through the cracks.

First of all, what you're looking for is a perceptually uniform color appearance model. XYZ is linear to light and as a result is not perceptually uniform. LAB (and the related LCHab) was an attempt to create a perceptually uniform model; it's lacking in a few areas, though it is still useful and in substantial use today in industrial applications, as it is fairly good for calculating very small differences in color. HSV is not at all uniform. As for contrast ratio, if you're referring to WCAG 2, it is not at all perceptually uniform, and as such it should not be used in an automated context. For automated tools, you need perceptual uniformity.

OKLab is a bit better than CIELAB. L* as a perceptual lightness correlate is based on Munsell value, which refers to diffusely reflecting large patches of paint in a very defined environment; L* does not seem to work as well on self-illuminated displays. Also, color difference is not the same as perceived contrast: ∆L*, i.e. the L* difference, does not necessarily provide a perceptually uniform contrast measure for design elements such as text.

SACAM, which we have been working on, is not ready for release, but you might find CAM16 interesting, or Jzazbz for instance. I'm not certain if that answers your question, but please feel free to ask further; you may want to start a new thread, as this does not directly apply to this thread.
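To make the L* point concrete, here is a minimal sketch (assumptions: sRGB greys and the standard CIE L* transfer function; not code from either poster) of the kind of "equal lightness steps" ramp described above:

```typescript
// Grey ramp with equal CIE L* steps.
// L* = 116 * f(Y) - 16, where f is the CIELAB cube-root function.

function lstarToY(lstar: number): number {
  // Inverse of L* = 116 * f(Y) - 16
  const f = (lstar + 16) / 116;
  const delta = 6 / 29;
  return f > delta ? Math.pow(f, 3) : 3 * delta * delta * (f - 4 / 29);
}

function yToSrgb8(y: number): number {
  // Linear luminance → 8-bit sRGB code value (grey: all channels equal)
  const c = y <= 0.0031308 ? 12.92 * y : 1.055 * Math.pow(y, 1 / 2.4) - 0.055;
  return Math.round(255 * Math.min(1, Math.max(0, c)));
}

// Ten greys from L* 5 to L* 95 in equal steps:
const ramp = Array.from({ length: 10 }, (_, i) => {
  const v = yToSrgb8(lstarToY(5 + i * 10));
  return `#${v.toString(16).padStart(2, "0").repeat(3)}`;
});
console.log(ramp);
```

Equal ∆L* steps like these look far more even than equal steps in raw RGB code values, though, per the caveats above, equal ∆L* is still not a guaranteed equal-contrast series on a self-illuminated display.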
-
Thanks for your reply. If I understand you correctly, there is no truly perceptually uniform color model where I can just mathematically create visually equidistant differences in lightness for any color (or at least for shades of grey). This is in line with my current understanding, and it would back what the creators of Tailwind CSS wrote in their book "Refactoring UI" about their manually selected color shades: they also say that after testing all kinds of color models, they came to the conclusion that no color model can mathematically match a manual selection to create perceived equidistant shades.
-
There are definitely models that can do this, but it's not nearly as simple as that. And even if you were to pick shades manually, when you came back to them days later you might be shocked to find that they don't seem to be what you thought you "picked".

Take a look at this image; it's one of my favorite illusions. The colored dots that are under the shadow are giving out the exact same color from your monitor as the similarly colored dots that are not in the shadow, and the same is true for the squares those dots are on. But for most people the colored dots under the shadow look as if they are glowing brighter, and the square that those dots are on appears to be white/light, whereas the square that is not in the shadow appears to be a dark square.

Here's that same image again, but with some graphic bars laid across from inside to outside of the shadow, to demonstrate that the colors are in fact identical. And here you may notice an interesting phenomenon: as the grey bars go out of the shadow, it might appear that they have a gradient on them, making them darker as they cross the shadow threshold. But there is no gradient: if you take an eyedropper tool and sample, you'll see the colors are in fact identical. The gradient on the bar is an illusion created in your brain as the bar goes out of the shadow.

So, the issue is that our vision is so very context sensitive that a color model needs to have inputs not just for a pair of colors but for all of the surrounding colors; we need to take into account all of the context in order to predict how a color is going to appear. Some color appearance models are fairly complete and do provide those additional inputs. There are versions of APCA/SACAM which have multiple inputs for these reasons; the APCA that's currently public facing only takes a pair of colors, with a number of assumptions made in terms of the common operating environment.

Keep in mind that color models are based on empirical studies and data collection of people's observations under controlled circumstances, so it's certainly possible to have models that fit within a particular set of circumstances.
-
Thanks for the additional input. The surrounding context is no issue for me; I actually understand already that the same color would be perceived differently in a different surrounding context, so that is something I would not challenge anyway. But my issue is only about creating perceived equidistant colors within a uniform context (where the whole background is evenly white, black, grey, blue, or whatever), and here I can definitely manually create fine nuances of colored spots or lines on this uniform background that seem equidistant in lightness, and still seem equidistant in lightness the next day and the next week. Is there a color model that can produce perceived equidistant results under these conditions of a uniform background?
-
Yes, but.... SACAM is such a model, but it is not in release yet. Most other models deal with diffusely reflective large patches (i.e. paint samples). One issue here is that the coefficients needed for a self-illuminated display or device, like a phone, are a little bit different from those needed for diffuse patches of paint.

Part of the surrounding context I'm talking about is the gradient itself, and it matters whether you're doing a continuous gradient or a step gradient. I have a couple of very old articles that compared a few different color models using step gradients; they are on the myndex.com site, but they're kind of old, I haven't updated them to the latest we've been working on, and importantly the discussion didn't get into the more advanced models.

I do however have a GitHub repo called "fancy font flipping", and the code creates a ton of stepwise gradients with different power curves to experiment with; you might find that helpful. There's also a live webpage and a codepen: fancy font flipping
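In the spirit of that experiment, here is a tiny sketch (hypothetical, not the repo's actual code) of a step gradient whose ramp shape is controlled by a power-curve exponent:

```typescript
// Step gradient with a power-curve ramp: exponent 1.0 is linear in
// 8-bit code values; higher exponents compress the dark end.

function stepGradient(steps: number, exponent: number): string[] {
  return Array.from({ length: steps }, (_, i) => {
    const t = steps === 1 ? 0 : i / (steps - 1);        // normalize to 0..1
    const v = Math.round(255 * Math.pow(t, exponent));  // apply power curve
    return `#${v.toString(16).padStart(2, "0").repeat(3)}`;
  });
}

console.log(stepGradient(8, 1.0)); // linear in code values
console.log(stepGradient(8, 2.2)); // roughly linear in light (display-gamma-ish)
```

Comparing the two printed ramps side by side shows how strongly the exponent changes which end of the scale gets the visibly distinct steps.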
-
This post was moved to its own thread at #103: Defining Requirements for a Complete Visual Contrast Guideline
Comments and thoughts welcome (see new thread please).

Old post contents hidden here

WCAG 2 Contrast — Key Failure Points
What Is Needed In a Complete Visual Contrast Guideline

- Science (empirically based models)
- Spatial Frequency (size, weight, spacing, zoom)
- Non-Text (spatial, semantic, non-semantic)
- Hue and Chroma (CVD, auto adjust, auto invert)
- Needed Technology Adds
Not the Kitchen Sink

The list looks long, but it is more about process; the actual guideline(s) should have much of this "hidden" and simplified for ease of use by designers and testers.

FOOTNOTES

Copyright © 2021-2023 by Somers & MTI. All Rights Reserved.