layout: true
class: center, middle

---

background-image: url(media/edge.png)
class: edge

Performance APIs and you

---

## Who needs standard APIs anyway?

???

So, what are perf APIs good for? Well, for many years on the Web, we didn't have such APIs. Developers who wanted to measure how their web sites (back then, it was still called web sitesi :D) were doing, or wanted to speed them up, did so using all kinds of hacks: registering listeners for various browser events, creating unnecessary elements in order to download resources sooner, etc. Not everything worked, but the stuff that did made you feel really proud of yourself and really dirty at the same time.

But not everything that developers needed to measure was actually measurable, and not everything that needed speeding up could be sped up. Most of all, having to hack your way to that info or those optimizations was a big barrier to entry, which meant that many developers didn't bother, and it showed. Performance of your average website was weak, and performance wasn't top-of-mind in the developer community.

So a few years back, the Web Performance Working Group came to be, and started working on that problem: How can we know our sites are as fast as they can be? And which mechanisms do we need to make them faster?

Many APIs were created for that purpose. I'll try to cover all of them (or at least all the major ones). I won't have time to dive into the details of any API in particular (even if I'd love to), so if you have questions about any of these APIs, maybe we'll have some time at the end; otherwise you can catch me after the session and I'd love to answer any questions you may have. There's also a lot of info and tutorials about these APIs online.

---

background-image: url(media/gauges.jpg)

# Monitor

???

The first major use case I'd like to talk about is that of monitoring.
You cannot speed up what you can't measure, and this "family" of APIs exists to make sure that developers can measure everything that's meaningful to their users' experience.

Why is measuring in the browser so important? Why not just set up synthetic testing with WebPageTest (which is awesome, btw) and measure your site's performance that way? In other words, why do the Real-User parts of RUM matter?

Well, after years of running synthetic tests and calling it a day, it turns out that real users have a significantly different view of reality than our lab machines. Their networks are flakier and more variable, their browsing scenarios are not identical to what we have in mind when we set up test cases, and their devices are far more diverse than anything we could ever fill our labs with. On top of that, they move around, they run out of battery, their devices heat up (which throttles down their CPUs), etc.

So, in order to *really* know what's going on, we need to measure in the field. We need to know the pains of our users, rather than the pains of our test setup.

---

background-image: url(media/eiffel.jpg)

# High resolution time

???

When you want to measure how long something takes (a pretty critical measurement when it comes to performance), the first step is to get yourself a stopwatch. Similarly, that's one of the first things that was worked on as part of the WebPerfWG. At the time, the best timing precision you could get on the web was 1ms, which was nice, but wasn't cutting it for various use cases.

So, the High Resolution Time spec defines an API, `performance.now()`, which gives you back a high-resolution timestamp of what "now" is. It's available both in the browser's main context and in workers.

(You may notice that I'm using the slide background image as an indicator. An Eiffel Tower on a slide means that it's a well-established spec, widely supported, and you can rely on it today. Why the Eiffel Tower?
Because I'm kinda French, so I thought it would be appropriate. *shrug*)

---

background-image: url(media/eiffel.jpg)

# Navigation Timing

???

Another major API for measuring page performance is the Navigation Timing API, which gives you an overview of a single navigation and some of its major milestones: DNS, connection establishment, when the request was sent, when the response arrived, DOM loaded, the window load event, etc. Some of those things were measurable before (by listening to events), but Navigation Timing made it significantly easier to measure these values.

---

background-image: url(media/eiffel.jpg)

# Resource Timing

???

Next up is Resource Timing, which enables you to do more or less the same thing, but on a per-resource basis. So for each resource you can now track all those metrics. This provides a lot of data for analysis that can answer questions like "Why was this page load slower than average?" and "Which parts of the page load should we optimize?" Recent additions also include the resource dimensions, which enable all kinds of interesting analysis regarding bandwidth used, compression, etc.

---

background-image: url(media/eiffel.jpg)

## Timing-Allow-Origin

???

In case you were actually paying attention, you might have wondered how Resource Timing can let us peek into the specific timings of all resources on the page, when so many of them are actually third-party resources. Wouldn't revealing that info about third-party resources be a security and privacy risk?

Well, it would be. Which is why detailed timing information regarding third-party resources is only provided if the third party opts in, signaling that it knows the data is not personalized and cannot be used to extract sensitive info regarding the user. That opt-in is the Timing-Allow-Origin header, which tells the browser that providing timing info on that resource is OK.

---

background-image: url(media/eiffel.jpg)

# User Timing

???
But what if you're trying to measure the time it took until some particular event happened on your site? E.g. the time until your script added a certain div to the DOM, or until your search box got focus, or any of the other possible metrics that you deem important for your site's engagement?

That's what User Timing is for. It enables you to create marks at various points in your page's lifecycle, and then either report those as-is, or measure the timespan between different marks (and the different milestones reported by NavTiming). Basically, it enables you to create and report your own custom metrics.

---

background-image: url(media/eiffel.jpg)

# Performance timeline

???

How do you report all those measurements? They are stored in something called the performance timeline. You can think of it as a large data structure storing entries that contain resource timing info as well as user timing info. You have functions such as `getEntriesByType` and `getEntriesByName` that enable you to get the entries, process them in JS, and then upload the result to your server for storage and further analysis.

---

background-image: url(media/future.jpg)

# Performance observer

???

Now, the performance timeline is great if you want to gather all the entries at some point in time and process them in aggregate, but what happens when you want, for example, to gather resource timing entries as they come? With the performance timeline, your only option is to poll the timeline and see if new entries were added. Another limitation of the timeline buffer is that by default it only contains 150 entries, so if you want it to store more than that (and if you have many resources on your page, you do), you need to bump up its size.

In order to address the pain of polling the timeline, a newer spec defines something called "Performance Observer".
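Before we get to the observer itself, here's a minimal sketch of the User Timing and timeline pieces working together. The mark and measure names are invented for the example:

```javascript
// Mark two points in the page's lifecycle, measure the span between
// them, and read it back off the performance timeline.
// 'search-box-init' and 'search-box-ready' are made-up names.
performance.mark('search-box-init');

// ... your setup work would happen here ...

performance.mark('search-box-ready');
performance.measure('search-box', 'search-box-init', 'search-box-ready');

const [searchBoxMeasure] = performance.getEntriesByName('search-box');
console.log(searchBoxMeasure.duration); // milliseconds, at high resolution
```

You'd typically collect such measures in JS and upload them to your own server for analysis.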
The Performance Observer follows the observer pattern, which you may know from other DOM APIs such as mutation observers, and allows you to subscribe to async callbacks when certain entries are recorded. Currently, the Performance Observer is deemed the future, but we still have to figure out a mechanism that would let you get notifications on all resources without having to run a synchronous script at the top of the page to register and handle these events.

---

background-image: url(media/eiffel.jpg)

# Visual metrics? ¯\_(ツ)_/¯

???

We talked about all kinds of measurements and metrics, but what about visual metrics? Is there any way to know what the user has seen, and therefore understand how changes to your site impact that? As an industry, we love WebPageTest (show of hands, who here works with WPT?), and having something similar in RUM would be great. Actual screenshots might be too much, but we'd love to have some indication of when the user saw something useful on screen.

---

background-image: url(media/future.jpg)

# First paint

???

One visual metric that has been available as a proprietary, flaky value in some browsers is first paint. But it usually just indicated the moment the browser went blank, which isn't very useful. That is now being replaced by a standard proposal for a metric that will give us the time at which the page's background was painted on screen. For us developers, it's an indication that stylesheets were downloaded, processed, and applied. More importantly, for our users, it means that page-specific elements are starting to get painted, so the user is likely to stick around for the rest of the page.

---

background-image: url(media/future.jpg)

# First contentful paint

???

First Paint is followed by "First Contentful Paint", which means that page elements are also painted to screen.
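In browsers that implement the Paint Timing proposal, both timestamps surface as `'paint'` entries on the performance timeline. A hedged sketch, with feature detection since support varies:

```javascript
// Sketch: watch for paint entries where the Paint Timing proposal is
// implemented (they arrive as 'first-paint' and 'first-contentful-paint').
// Returns false when this environment doesn't support the entry type.
function observePaints(onPaint) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes ||
      !PerformanceObserver.supportedEntryTypes.includes('paint')) {
    return false;
  }
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      onPaint(entry.name, entry.startTime); // e.g. 'first-contentful-paint'
    }
  }).observe({ type: 'paint', buffered: true });
  return true;
}
```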
For some sites, that time can be similar to First Paint, but in other cases, especially when the content is JS-injected, the FCP metric can be an indication that the user is starting to see some content.

---

background-image: url(media/future.jpg)

# Declarative marks / Hero API

???

Both of the visual metrics above are still not enough to answer the question "when does the user see the meaningful parts of the page?" Browsers tried to play around with heuristics that would give us some indication of a "First Meaningful Paint", but that turned out to be non-trivial to specify as a generic metric. Instead, we are likely to see tools that would enable us to annotate various parts of the page, and have the browser report back on when those parts were painted. We could then use these annotations to create our own "First Meaningful Paint" metric and make sure those times are as short as they can be.

---

background-image: url(media/future.jpg)

# Long Tasks

???

## Input latency

## Time to interactive

Another area that's lacking RUM data is runtime performance. It's not currently reported in any standard way, and while you can hack your way to such data (e.g. by measuring the time difference between events that are supposed to fire at regular intervals), it's not pretty. Now we're talking about changing that with the Long Task Observer API, which will report... well, long tasks.

The browser's processing model means that it has to run operations such as script execution, layout, etc. on the main thread, in what's called tasks. While these tasks are running, nothing else can happen on the main thread. That means that if your users are trying to interact with the page at that point, nothing happens. As long as your tasks are fairly small, that's not a problem. But once your tasks get longer (say, 50ms or more), you will have a very hard time responding properly to user input while keeping the user experience smooth. So, long tasks are bad, mmmmkay?
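In browsers that implement the proposal, subscribing looks roughly like this (a sketch with feature detection, since support varies):

```javascript
// Sketch: subscribe to long task reports -- tasks that blocked the
// main thread for 50ms or more. Bails out where 'longtask' entries
// aren't supported.
function observeLongTasks(onLongTask) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes ||
      !PerformanceObserver.supportedEntryTypes.includes('longtask')) {
    return false;
  }
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // duration is >= 50ms by definition; attribution hints at the culprit
      onLongTask(entry.duration, entry.attribution);
    }
  }).observe({ entryTypes: ['longtask'] });
  return true;
}
```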
Once the browser starts reporting them back to us, we can actually try and see which user scenarios lead to those long tasks and avoid them: breaking apart large chunks of JavaScript into smaller execution units, avoiding large layout operations, etc. Long task observers can also be used as a primitive that helps us define other meaningful metrics, such as the input latency of the main thread, or how long it took for our page to become interactive (as opposed to simply painted on screen).

---

background-image: url(media/future.jpg)

## Frame Timing ??

???

In the past we also discussed Frame Timing as an API that would enable us to spot jank in our apps, which is slightly related to long tasks. However, it currently seems that Frame Timing has lost steam, and it's not scheduled to ship anytime soon.

---

background-image: url(media/schedule.jpg)

# Schedule

???

Enough about monitoring; let's talk about scheduling. There are a few standards in this space that can make it significantly easier for you to run JavaScript at the right time.

---

background-image: url(media/eiffel.jpg)

## RequestAnimationFrame

???

The first, fairly well-known scheduling interface is RequestAnimationFrame, or RAF as its mother fondly calls it. That API, which started in the WebPerfWG and then got integrated into the main HTML processing model, enables you to schedule JS code to run right after the next frame is drawn on screen. That means you have a full 16ms (assuming 60Hz screens) before the next frame is triggered, which normally leaves you plenty of time to perform any animation-related operations. Note that you shouldn't actually run 16ms worth of JS code at that point, because you need to leave the browser enough time to perform its own internal operations before the next frame. Also, this timer doesn't fire when the tab is not visible, saving battery life and CPU cycles.

---

background-image: url(media/future.jpg)

## RequestIdleCallback

???
A newer scheduling API called RequestIdleCallback enables you to find times when the browser's main thread is basically sitting around bored, and run certain tasks only during those times. That helps you make sure that your not-that-important tasks run without interfering with the browser's critical tasks, such as actually showing content on screen and reacting to user input. Note that you should run fairly small JS tasks in your callbacks here, since once your JS starts running, the browser can't stop it. So you want to run your code in small execution chunks, to avoid hogging the main thread.

---

background-image: url(media/report.jpg)

# Report

???

And now, reporting. Reporting is always exciting (who am I kidding, it's never exciting... but it is important). All those monitoring metrics we talked about are useless unless we actually report them back.

---

background-image: url(media/eiffel.jpg)

# Beacon API

???

Traditionally, the way many monitoring frameworks reported metrics back was synchronous XHR. But sync XHR is bad. It halts the main thread on the network. No one should be using it. EVAR!

So, what's the alternative to sync XHR? The Beacon API! The Beacon API is a simple way to send up a POST with your metrics data and have reasonable guarantees that the browser will send it out even if the user navigated away. The spec also permits browsers to delay sending the beacon in case they don't want to wake up the radio just to send up perf metrics (because waking up the cellular radio is one of the most battery-draining operations).

We're now discussing adding various capabilities to the Beacon API: enabling methods other than POST, adding custom headers, etc. Where we'll probably end up is extending the Fetch API with Beacon's capabilities: the ability to live beyond the current page's lifetime, and the ability to be delayed at the browser's choosing.
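In practice, beaconing your metrics can be as simple as the following sketch. The `/analytics` endpoint is invented for the example, and the helper just bails out where `sendBeacon` isn't available:

```javascript
// Queue a small metrics payload so the browser can deliver it even if
// the user navigates away. The '/analytics' endpoint is made up here.
function reportMetrics(url, data) {
  if (typeof navigator !== 'undefined' &&
      typeof navigator.sendBeacon === 'function') {
    // Returns true if the browser accepted the beacon for delivery.
    return navigator.sendBeacon(url, JSON.stringify(data));
  }
  return false; // no beacon support (old browser or non-browser runtime)
}

// Typical usage: flush metrics at the last reliable moment.
if (typeof window !== 'undefined') {
  window.addEventListener('pagehide', () => {
    reportMetrics('/analytics', { loadTime: performance.now() });
  });
}
```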
---

background-image: url(media/future.jpg)

# Network Error Logging

???

One thing we rarely know is how many users tried to reach our site but never made it. All of our metrics are based on JS, so they assume that the user was able to get to our site and run its JS. But what if they never made it? What if they were on a lousy network and the request never got to our servers? Or our servers were in the process of collapsing and never responded to the user's requests?

Well, a proposed API called Network Error Logging is supposed to give us the ability to get such reports. The user of course needs to reach our site once, to get that reporting endpoint registered, but once they have, we would get reports any time in the future when they try to reach our site and it isn't there. That particular proposal lost some interest, but it might make a comeback one day...

---

background-image: url(media/hint.jpg)

# Hint

???

And now to my favorite part: hints! Hints enable us to tell the browser things that we as developers know and the browser doesn't. That lets the browser take those things into account early on, and make our pages faster!

---

background-image: url(media/eiffel.jpg)

# sizes & srcset

???

I'm not gonna cover the full responsive images solution here, as it's something I could do a full talk on. How many of you are familiar with responsive images? (Maybe I should have done a responsive images talk?)

In short, responsive images are all about sending appropriately dimensioned images to the browser, despite the fact that we have to cater to many device types with different screen sizes and resolutions. As part of that, the sizes and srcset attributes enable us to tell the browser what the dimensions of the displayed image would be at the various breakpoints, as well as give it a list of resources it can choose from, according to its needs. The browser can then download an appropriately dimensioned image, and save the user's bandwidth and time.
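To give a feel for what the browser does with those candidates, here's a simplified sketch of density-based selection from a srcset string. The real selection algorithm can also weigh the `sizes` attribute, network conditions, and what's already cached; the file names are invented:

```javascript
// Pick a srcset candidate by density descriptor: the smallest image
// whose density covers the device pixel ratio, or else the densest
// one available. A simplification of what browsers actually do.
function pickBySrcset(srcset, devicePixelRatio) {
  const candidates = srcset.split(',').map((part) => {
    const [url, descriptor] = part.trim().split(/\s+/);
    return { url, density: parseFloat(descriptor) || 1 };
  }).sort((a, b) => a.density - b.density);
  const match = candidates.find((c) => c.density >= devicePixelRatio);
  return (match || candidates[candidates.length - 1]).url;
}

pickBySrcset('photo.jpg 1x, photo@2x.jpg 2x', 2); // → 'photo@2x.jpg'
pickBySrcset('photo.jpg 1x, photo@2x.jpg 2x', 1); // → 'photo.jpg'
```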
---

background-image: url(media/future.jpg)

# Preconnect

???

Preconnect is another type of hint, one that enables the developer to tell the browser in advance about a set of hosts the browser will soon need to connect to. That enables the browser to perform DNS resolution and establish TCP and TLS connections to those hosts, effectively taking those connection times off the critical path for those resources.

---

background-image: url(media/future.jpg)

# Preload

???

Preload takes the same concept a step further, making sure the browser is aware of resources it will need later on. That enables the browser not only to establish the connection, but also to start downloading the resource ahead of time, according to its priority.

---

background-image: url(media/future.jpg)

# Client hints

???

A reverse example of hints is a set of standards called Client Hints. They enable the browser to send hints to the server indicating the browser's environment and conditions, so that the server can adapt the resource it sends to those conditions. Examples of such hints are the viewport dimensions, image dimensions, screen density, the user's preference for data savings, etc.

---

# To Conclude

---

# Perf APIs are awesome!

---

# Fast sites today!

---

# More APIs coming soon

---

# Thank you

---

## Acknowledgements

* Gauges - Ky https://www.flickr.com/photos/61555160@N00/2825311303
* DeLorean - William Warby https://www.flickr.com/photos/26782864@N00/9637975771
* Eiffel tower - Olga Díez https://www.flickr.com/photos/20032975@N00/9676216870
* Schedule - Dan Lurie https://www.flickr.com/photos/67396912@N00/440470493
* Report - Brent Leimenstoll https://www.flickr.com/photos/48046976@N08/8417455710
* Hint - https://www.flickr.com/photos/59852454@N03/14140667377