
How to Think Like a Performance Engineer

Harry Roberts
July 17, 2024

As awareness and tooling around site speed have been improving at a very exciting rate, has performance testing actually become any easier? Any more straightforward? As someone who spends every day auditing client projects, I think areas of confusion have actually increased in many places. Which tools should we be using? Can we trust them? How do we run tests that serve as realistic and actionable predictors? And how do we know when we’ve won?

In this talk, we’ll look at highly practical tools and workflows to ensure that every test we run has a purpose and gives us data we can truly leverage. By the end, we will all have a shared idea of what effective performance testing looks like, as well as customised and fine-tuned tooling to ensure replicable and predictable tests.


Transcript

  1. how to think like a performance engineer

  2. hi—i’m harry

  3. @csswizardry

  4. tools of the trade

  5. crux

  6. chrome user experience report

  7. None
  8. treo

  9. None
  10. None
  11. None
  12. webpagetest

  13. None
  14. devtools

  15. None
  16. the metrics that matter

  17. agree on what you’re benchmarking

  18. core web vitals

  19. the easiest place to start

  20. — web.dev/vitals

  21. diagnostic metrics

  22. time to first byte

  23. None
  24. “While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.” — csswz.it/ttfb

  25. api calls, application runtime, cdn features, cheap hosting, ddos or heavy load, dns, database queries, filesystem reads, last-mile latency, latency, prioritisation, redirects, routing, server-side rendering, tcp, tls, wafs and load balancers
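
For reference (not part of the deck), TTFB can also be read in the field straight from the Navigation Timing API; a minimal sketch:

    // Sketch: reading TTFB from the Navigation Timing API.
    // responseStart is when the first byte of the response arrived,
    // measured from the start of the navigation.
    const [nav] = performance.getEntriesByType('navigation');
    if (nav) {
      console.log('TTFB (ms):', nav.responseStart);
    }
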
  26. domcontentloaded

  27. None
  28. KEY TAKEAWAY: domcontentloaded fires after deferred js has finished running
  29. None
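
One way to see where domcontentloaded actually lands (not shown in the deck) is to compare it against the response times in the same Navigation Timing entry:

    // Sketch: with deferred JS, domContentLoadedEventStart can sit well after responseEnd.
    const [nav] = performance.getEntriesByType('navigation');
    console.log('responseEnd (ms):', nav.responseEnd);
    console.log('DOMContentLoaded start (ms):', nav.domContentLoadedEventStart);
    console.log('DOMContentLoaded end (ms):', nav.domContentLoadedEventEnd);
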
  30. custom metrics

  31. WARNING: largest !== most important
  32. None
  33. None
  34. element timing api

  35. <input [...] elementtiming=firstInput>

  36. None
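
The deck doesn’t show the read side, but Element Timing entries surface through a PerformanceObserver; a minimal sketch, reusing the firstInput identifier from slide 35:

    // Sketch: log when the element marked elementtiming=firstInput gets painted.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (entry.identifier === 'firstInput') {
          console.log('firstInput timing (ms):', entry.renderTime || entry.loadTime);
        }
      }
    }).observe({ type: 'element', buffered: true });
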
  37. bare-metal metrics

  38. WARNING: core web vitals are too broad for localhost

  39. lcp

  40. ttfb lcp

  41. ttfb cssTime lcp

  42. None
  43. <script> performance.mark('cssStart'); </script>
      <link rel=stylesheet href=app.css>
      <script>
        performance.mark('cssEnd');
        const cssTime = performance.measure('cssTime', 'cssStart', 'cssEnd');
        console.log(cssTime.duration);
      </script>

  44. <script> performance.mark('cssStart'); </script>
      <style>/*! app.css */</style>
      <script>
        performance.mark('cssEnd');
        const cssTime = performance.measure('cssTime', 'cssStart', 'cssEnd');
        console.log(cssTime.duration);
      </script>

  45. KEY TAKEAWAY: custom timings better capture the bare-metal impact of your work

  46. the 75th percentile

  47. None
  48. None
  49. p75 is a big number*

  50. “Data from ATI also confirms that BBC.com had […] 1.5 billion page views […] in March 2020.” — bit.ly/3S8xexY

  51. 1,125,000,000

  52. *but it’s not really big enough

  53. 375,000,000

  54. aim around p95
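
As a rough illustration of the p75-vs-p95 point (not from the deck, and the sample values are invented), a nearest-rank percentile sketch:

    // Sketch: nearest-rank percentile over a handful of made-up LCP samples (ms).
    function percentile(samples, p) {
      const sorted = [...samples].sort((a, b) => a - b);
      const rank = Math.ceil((p / 100) * sorted.length) - 1;
      return sorted[Math.max(0, rank)];
    }

    const lcpSamples = [1200, 1800, 2100, 2400, 3100, 3900, 4200, 5600, 7400, 9800];
    console.log('p75:', percentile(lcpSamples, 75)); // 5600: still hides the slowest quarter of visits
    console.log('p95:', percentile(lcpSamples, 95)); // 9800: much closer to the worst experiences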

  55. reliable, realistic, or repeatable?

  56. pick three!

  57. test conditions

  58. urls, device type, connection speed, geographic locale

  59. WARNING: this gets a little fiddly

  60. urls

  61. you should know your own urls: source them from search console, analytics, commercial importance

  62. None
  63. device type

  64. None
  65. connection speed

  66. None
  67. WARNING: most tools run too slowly

  68. 3.2

  69. None
  70.            score   difference (+)
      3g fast    8.715   5.515
      4g         6.230   3.030
      4g fast    5.280   2.080
      lte        5.018   1.818
      cable      5.670   2.470

      (The difference column appears to be each lab score minus the 3.2 shown on slide 68.)

  71. KEY TAKEAWAY: most tools’ defaults are too pessimistic; try to align lab tests with field data

  72. geographic locale

  73. None
  74. None
  75. throttling? not so fast!

  76. dev tools throttling is very synthetic

  77. None
  78. None
  79. — csswz.it/464NNRb

  80. live website                    ✅  ❌
      localhost                       ❌  ✅
      lots of third-parties or apis   ✅  ❌

  81. None
  82. None
  83. None
  84. cold-start scenarios

  85. most tools start from a very pessimistic baseline

  86. None
  87. None
  88. empty cache; no user interactions; no open dns, tcp, tls; cookies not accepted

  89. KEY TAKEAWAY: important to test, but don’t over-focus on cold-start

  90. KEY TAKEAWAY: cold starts should be your least frequent scenarios

  91. KEY TAKEAWAY: cold starts are your most pessimistic scenario

  92. KEY TAKEAWAY: cold starts tend to show your most obvious issues

  93. test scenarios

  94. interaction, mid-session, user journeys, cookie banners, logged-in state, experimentation, repeat page-view

  95. the application panel

  96. None
  97. cookie banners

  98. None
  99. None
  100. None
  101. None
  102. setEventName   Home
       navigate       %URL%

       setEventName   AcceptCookies
       execAndWait    document.querySelector('#nhsuk-cookie-banner__link_accept_analytics').click()

  103. None
  104. most page views will not incur a cookie banner

  105. None
  106. None
  107. // Auth cookie, consent, experimentation, etc.
       setCookie   %ORIGIN%   <name>=<value>
       navigate    %URL%

  108. None
  109. shopping carts

  110. https://www.website.com/checkout

  111. None
  112. “There are no products in your shopping cart yet.”

  113. None
  114. None
  115. // Add something to localStorage before
       // the app needs to read it back out.
       localStorage.setItem('key', 'value');
  116. None
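
The deck doesn’t spell out how to combine this with the test scripts shown earlier; assuming WebPageTest’s exec command is available, a sketch might look like this (the key/value pair is a placeholder):

    // Sketch (assumption): land on the origin first so localStorage is writable,
    // seed it, then navigate to the page under test.
    navigate    %ORIGIN%
    exec        localStorage.setItem('key', 'value')
    navigate    %URL%
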
  117. WARNING: this is still an unlikely (i.e. cold-start) scenario

  118. user interactions and user journeys

  119. how did they get there in the first place?

  120. None
  121. None
  122. None
  123. None
  124. None
  125. WARNING: even this isn’t the same as a real user clicking around

  126. None
  127. no amount of cold-start testing could get anywhere close to replicating the problem

  128. api calls, application runtime, cdn features, cheap hosting, ddos or heavy load, dns, database queries, filesystem reads, last-mile latency, latency, prioritisation, redirects, routing, server-side rendering, tcp, tls, wafs and load balancers

  129. api calls, application runtime, cdn features, cheap hosting, ddos or heavy load, dns, database queries, filesystem reads, last-mile latency, latency, prioritisation, redirects, routing, server-side rendering, tcp, tls, wafs and load balancers

  130. -status-code:200 domain:www.first-party.com

  131. None
  132. soft navigations and spa

  133. WARNING: you can’t use the navigate command with spa

  134. None
  135. None
  136. setEventName   Home
       // Trigger hard navigation
       navigate       %URL%

       setEventName   About
       // Trigger soft navigation
       execAndWait    document.querySelector('[href=about/]').click()

  137. wrapping up

  138. KEY TAKEAWAY: the numbers you see represent a huge array of experiences…

  139. KEY TAKEAWAY: …you can’t keep running one test

  140. KEY TAKEAWAY: design tests that suit your context

  141. KEY TAKEAWAY: test the right things under the right conditions and in the right scenarios

  142. KEY TAKEAWAY: the fun stuff is the most well hidden

  143. thank you

  144. harry.is/for-hire

