Whether you prefer organizing your browser with tab groups, naming your windows, tab search, or another method, you have lots of features that help you get to the tabs you want. In this post in The Fast and the Curious series, we describe how we use the visibility of Chrome's windows to optimize performance, leading to 25.8% faster startup and 4.5% fewer crashes.
Background
For several years, to improve the user experience, Chrome has lowered the priority of background tabs[1]. For example, JavaScript is throttled in background tabs, and these tabs don’t render web content. This reduces CPU, GPU and memory usage, which leaves more memory, CPU and GPU for foreground tabs that the user actually sees. However, the logic was limited to tabs that weren't focused in their window, or windows that were minimized or otherwise moved offscreen.
Through experiments, we found that nearly 20% of Chrome windows are completely covered by other windows, i.e., occluded. If these occluded windows were treated like background tabs, our hypothesis was that we would see significant performance benefits. So, around three years ago, we started working on a project to track the occlusion state of each Chrome window in real time, and lower the priority of tabs in occluded windows. We called this project Native Window Occlusion, because we had to know about the location of native, non-Chrome windows on the user’s screen. (The location information is discarded immediately after it is used in the occlusion calculation.)
Calculating Native Window Occlusion
The Windows OS doesn’t provide a direct way to find out if a window is completely covered by other windows, so Chrome has to figure it out on its own. If we only had to worry about other Chrome windows, this would be simple because we know where Chrome windows are, but we have to consider all the non-Chrome windows a user might have open, and know about anything that happens that might change whether Chrome windows are occluded or not.
There are two main pieces to keeping track of which Chrome windows are occluded. The first is the occlusion calculation, which consists of iterating over the open windows on the desktop, in z-order (front to back) and seeing if the windows in front of a Chrome window completely cover it. The second piece is deciding when to do the occlusion calculation.
Calculating Occlusion
In theory, figuring out which windows are occluded is fairly simple. In practice, however, there are lots of complications, such as multi-monitor setups, virtual desktops, non-opaque windows, and even cloaked windows(!). This needs to be done with great care, because if we decide that a window is occluded when in fact it is visible to the user, then the area where the user expects to see web contents will be white. We also don’t want to block the UI thread while doing the occlusion calculation, because that could reduce the responsiveness of Chrome and degrade the user experience. So, we compute occlusion on a separate thread, as follows:
Ignore minimized windows, since they’re not visible.
Mark Chrome windows on a different virtual desktop as occluded.
Compute the virtual screen rectangle, which combines the display monitors. This is the unoccluded screen rectangle.
Iterate over the open windows on the desktop from front to back, ignoring invisible windows, transparent windows, floating windows (windows with style WS_EX_TOOLWINDOW), cloaked windows, windows on other virtual desktops, non-rectangular windows[2], etc. Ignoring these kinds of windows may cause some occluded windows to be considered visible (false negatives), but importantly it won't lead to treating visible windows as occluded (false positives). For each window:
Subtract the window's area from the unoccluded screen rectangle.
If the window is a Chrome window, check whether its area overlapped the unoccluded area before the subtraction. If it didn't, the Chrome window is completely covered by the windows in front of it, so it is occluded.
Keep iterating until the occlusion state of every Chrome window has been determined.
At this point, any Chrome window that we haven’t marked occluded is visible, and we’re done computing occlusion. Whew! Now we post a task to the UI thread to update the visibility of the Chrome windows.
This is all done without synchronization locks, so the occlusion calculation has minimal effect on the UI thread, e.g., it will not ever block the UI thread and degrade the user experience.
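To make the shape of that walk concrete, here is a rough C++ sketch using the Win32 region APIs. It is illustrative only, not Chrome's actual code: the names (OcclusionState, OnWindow, ComputeOcclusion) are invented for this sketch, and it skips the special handling of cloaked, transparent, tool, and virtual-desktop windows described above.

#include <windows.h>
#include <unordered_map>
#include <vector>

enum class Occlusion { kUnknown, kVisible, kOccluded };

struct OcclusionState {
  HRGN unoccluded_region;   // screen area not yet covered by any window
  std::unordered_map<HWND, Occlusion> chrome_windows;
  size_t num_undetermined;  // Chrome windows still kUnknown
};

// Called by EnumWindows() for each top-level window, front to back.
BOOL CALLBACK OnWindow(HWND hwnd, LPARAM lparam) {
  auto* state = reinterpret_cast<OcclusionState*>(lparam);
  if (!IsWindowVisible(hwnd) || IsIconic(hwnd))
    return TRUE;  // skip invisible and minimized windows
  RECT rect;
  if (!GetWindowRect(hwnd, &rect))
    return TRUE;
  HRGN window_region = CreateRectRgnIndirect(&rect);
  auto it = state->chrome_windows.find(hwnd);
  if (it != state->chrome_windows.end() && it->second == Occlusion::kUnknown) {
    // Check the overlap *before* subtracting this window's own area.
    HRGN overlap = CreateRectRgn(0, 0, 0, 0);
    int kind = CombineRgn(overlap, window_region, state->unoccluded_region, RGN_AND);
    it->second = (kind == NULLREGION) ? Occlusion::kOccluded : Occlusion::kVisible;
    DeleteObject(overlap);
    --state->num_undetermined;
  }
  // Everything behind this window is covered by it: subtract its area.
  CombineRgn(state->unoccluded_region, state->unoccluded_region, window_region, RGN_DIFF);
  DeleteObject(window_region);
  return state->num_undetermined > 0;  // stop early once every Chrome window is classified
}

void ComputeOcclusion(const std::vector<HWND>& chrome_hwnds) {
  OcclusionState state;
  // Start from the virtual screen rectangle that spans all monitors.
  int x = GetSystemMetrics(SM_XVIRTUALSCREEN);
  int y = GetSystemMetrics(SM_YVIRTUALSCREEN);
  state.unoccluded_region = CreateRectRgn(x, y, x + GetSystemMetrics(SM_CXVIRTUALSCREEN),
                                          y + GetSystemMetrics(SM_CYVIRTUALSCREEN));
  for (HWND hwnd : chrome_hwnds)
    state.chrome_windows[hwnd] = Occlusion::kUnknown;
  state.num_undetermined = chrome_hwnds.size();
  EnumWindows(&OnWindow, reinterpret_cast<LPARAM>(&state));
  DeleteObject(state.unoccluded_region);
  // Windows still kUnknown here (e.g. minimized ones we skipped) are handled
  // separately; the results are then posted as a task to the UI thread.
}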
For more detailed implementation information, see the documentation.
Deciding When to Calculate Occlusion
We don’t want to continuously calculate occlusion because it would degrade the performance of Chrome, so we need to know when a window might become visible or occluded. Fortunately, Windows lets you track various system events, like windows moving or getting resized/maximized/minimized. The occlusion-calculation thread tells Windows that it wants to track those events, and when notified of an event, it examines the event to decide whether to do a new occlusion calculation. Because we may get several events in a very short time, we don’t calculate occlusion more than once every 16 milliseconds, which corresponds to the time a single frame is displayed, assuming a frame rate of 60 frames per second (fps).
Some of the events we listen for are windows getting activated or deactivated, windows moving or resizing, the user locking or unlocking the screen, turning off the monitor, etc. We don’t want to calculate occlusion more than necessary, but we don’t want to miss an event that causes a window to become visible, because if we do, the user will see a white area where their web contents should be. It’s a delicate balance[3].
The events we listen for are focused on whether a Chrome window is occluded. For example, moving the mouse generates a lot of events, and cursors generate an event for every blink, so we ignore events that aren’t for window objects. We also ignore events for most popup windows, so that tooltips getting shown doesn’t trigger an occlusion calculation.
The occlusion thread tells Windows that it wants to know about various Windows events. The UI thread tells Windows that it wants to know when there are major state changes, e.g., the monitor is powered off, or the user locks the screen.
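As a simplified sketch of that event plumbing (again illustrative rather than Chrome's actual code; ComputeOcclusion() stands in for the calculation above, and the set of hooked events is abbreviated), the occlusion thread might register out-of-context win-event hooks and throttle recomputation like this:

#include <windows.h>

namespace {

constexpr ULONGLONG kMinIntervalMs = 16;  // at most one calculation per 60 fps frame
ULONGLONG g_last_computation_ms = 0;

void MaybeComputeOcclusion() {
  ULONGLONG now = GetTickCount64();
  if (now - g_last_computation_ms < kMinIntervalMs)
    return;  // several events arrived within one frame: skip this recomputation
  g_last_computation_ms = now;
  // ComputeOcclusion(...);  // walk the windows as sketched earlier
}

void CALLBACK OnWinEvent(HWINEVENTHOOK, DWORD, HWND hwnd, LONG id_object,
                         LONG, DWORD, DWORD) {
  // Ignore events that aren't about top-level window objects (e.g. the blinking
  // caret), so tooltips and cursor changes don't trigger an occlusion pass.
  if (id_object != OBJID_WINDOW || !hwnd)
    return;
  MaybeComputeOcclusion();
}

}  // namespace

void RegisterOcclusionEventHooks() {
  // Windows being shown, hidden, reordered, moved, or resized.
  SetWinEventHook(EVENT_OBJECT_SHOW, EVENT_OBJECT_LOCATIONCHANGE, nullptr,
                  &OnWinEvent, /*idProcess=*/0, /*idThread=*/0, WINEVENT_OUTOFCONTEXT);
  // A different window was activated (became the foreground window).
  SetWinEventHook(EVENT_SYSTEM_FOREGROUND, EVENT_SYSTEM_FOREGROUND, nullptr,
                  &OnWinEvent, 0, 0, WINEVENT_OUTOFCONTEXT);
}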
Results
This feature was developed behind an experiment to measure its effect and rolled out to 100% of Chrome users on Windows in October 2020 as part of the M86 release. Our metrics show significant performance benefits with the feature turned on, including the 25.8% faster startup and 4.5% fewer crashes mentioned above.
One reason for the startup and first-contentful-paint improvements is that when Chrome restores two or more full-screen windows at startup, one of the windows is likely to be occluded. Chrome now skips much of the work for that window, saving resources for the more important foreground window.
Posted by David Bienvenu, Chrome Developer
Data source for all statistics: Real-world data anonymously aggregated from Chrome clients.
[1] Note that certain tabs are exempt from having their priority lowered, e.g., tabs playing audio or video.
[2] Non-rectangular windows complicate the calculations and were thought to be rare, but it turns out non-rectangular windows are common on Windows 7, due to some quirks of the default Windows 7 theme.
[3] When this was initially launched, we quickly discovered that Citrix users were getting white windows whenever another user locked their screen, due to Windows sending us session changed notifications for sessions that were not the current session. For the details, look here.
The compiler can either compile this as a loop (smaller), or turn it into five additions in a row (faster, but bigger).
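The function in question is presumably something like fiver() below, with foo() defined in a different source file (a sketch based on the fiver_inline() example later in this post):

// Reconstructed for illustration - foo() lives in another file, so the
// compiler compiling this file sees only its declaration.
extern int foo();

int fiver(int num) {
  for (int j = 0; j < 5; j++)
    num = num + foo();
  return num;
}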
You save the cost of checking the end of the loop and incrementing a variable every time through the loop. But in exchange, you now have many repeated calls to foo(). If foo() is called a lot, that is a lot of space.
And while speed matters a lot, we also care about binary size. (Yes, we see your memes!) And that tradeoff - exchanging speed for memory, and vice versa - holds for a lot of compiler optimizations.
So how do you decide if the cost is worth it? One good way is to optimize for speed in code that runs often, because the speed wins accumulate each time that code runs. You could simply guess at what to inline (your compiler does this; it's called a "heuristic", an educated guess), and then measure speed and code size.
The result: Likely faster. Likely larger. Is that good?
Ask any engineer a question like that, and they will answer “It depends”. So, how do you get an answer?
The More You Know… (profiling & PGO)
The best way to make a decision is with data. We collect data on what gets run a lot and what gets run a little, and we do that for several different scenarios, because our users do lots of different things with Chrome and we want all of them to be fast.
Our goal is to collect performance data in various scenarios and use it to guide the compiler. There are three steps:
Instrument for profiling (the idea is sketched after this list)
Run that instrumented executable in various scenarios
Use the resulting performance profile to guide the compiler.
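To give a feel for what step 1 means, here is a hand-written illustration of the idea; a real instrumented build inserts counters like this automatically (and far more of them, per branch and per call edge) and writes them out as a profile when the program exits. The counter and function names are invented for this sketch.

#include <cstdint>
#include <cstdio>

static uint64_t g_fiver_count = 0;     // "how often does fiver() run?"

int fiver(int num) {
  ++g_fiver_count;                     // inserted by the instrumentation pass
  for (int j = 0; j < 5; j++)
    num = num + 3;
  return num;
}

// At program exit, the counters are written to a profile file; the next build
// reads that profile and spends its speed-for-size budget on the hot functions.
void DumpProfile() {
  std::fprintf(stderr, "fiver ran %llu times\n",
               static_cast<unsigned long long>(g_fiver_count));
}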
But we can do more (ThinLTO)
That's a good start, but we can do better. Let's look at inlining - the compiler takes the code of a called function and inserts all of it at the callsite.
inline int foo() { return 3; }
int fiver_inline(int num) {
for(int j = 0; j < 5; j++)
num = num + foo();
return num;
}
When the compiler inlines foo(), it turns into
int fiver_inline(int num) {
for(int j = 0; j < 5; j++)
num = num + 3;
return num;
}
Not bad - it saves us the function call and all the setup that goes with having a function. But the compiler can in fact do even better - because now all the information is in one place. The compiler can apply that knowledge and deduce that fiver_inline() adds the number three and does so 5 times - and so the entire code is boiled down to
return num + 15;
Which is awesome! But the compiler can only do this if the source code for foo() and the location where it is called are in the same source file - otherwise, the compiler does not know what to inline. (That's the fiver() example). A trivial way around that is to combine all the source files into one giant source file and compile and link that in one go.
There's just one downside - that approach needs to generate all of the machine code for Chrome, all the time. Change one line in one file, compile all of Chrome. And there's a lot of Chrome. It also effectively disables caching build results and so makes remote compilation much less useful. (And we rely a lot on remote compilation & caching so we can quickly build new versions of Chrome.)
So, back to the drawing board. The core insight is that each source file only needs to include a few functions - it doesn't need to see every single other file. All that's needed is "cleverly" mixing the right inline functions into the right source files.
Now we're back to compiling individual source files. Distributed/cached compilation works again, small changes don't cause a full rebuild, and since ThinLTO only inlines a few functions into each file, the overhead is relatively small.
Of course, the question of "which functions should ThinLTO inline?" still needs to be answered. And the answer is still "the ones that are small and called a lot". Hey, we know those already - from the profiles we generated for Profile Guided Optimization (PGO). Talk about lucky coincidences!
But wait, there's more! (Callgraph Sorting)
We've done a lot for inlined function calls. Is there anything we can do to speed up functions that haven't been inlined, too? Turns out there is.
One important factor is that the CPU doesn't fetch data byte by byte, but in chunks. And so, if we could ensure that a chunk of data doesn't just contain the function we need right now, but ideally also the ones that we'll need next, we could ensure that we have to go out and get chunks of data less often.
In other words, we want functions that are called one right after the other to also live next to each other in memory ("code locality"). And we already know which functions are called close to each other - because we ran our profiling and stored performance profiles for PGO.
We can then use that information to ensure that the right functions are next to each other when we link.
For example, the following code
g.c
extern int f1();
extern int f2();
extern int f3();
void g() {
  f1();
  for (int i = 0; i < 100; i++) {
    f3();
  }
  f1();
  f2();
}
could be interpreted as "g() calls f3() a lot - so keep that one really close. f1() is called twice, so… somewhat close. And if we can squeeze in f2, even better". The calling sequence is a "call graph", and so this sorting process is called "call graph sorting".
Just changing the order of functions in memory might not sound like a lot, but it leads to a ~3% performance improvement. And to know which functions call which other ones a lot… yep. You guessed it. Our profiles from the PGO work pay off again.
One more thing.
It turns out that the compiler can make even more use of that profile data for PGO. (Not a surprise - once you know exactly where the slow spots are, you can do a lot to improve!) To make use of that, and to enable further improvements, LLVM has something called the "new pass manager". In a nutshell, it's a new way to run optimizations within LLVM, and it helps a lot with PGO. For much more detail, I'd suggest reading the LLVM blog post.
Turning that on leads to another ~3% performance increase, and ~9MB size reduction.
Why Now?
Good question. One part of it is that PGO and profiling unlock an entirely new set of optimizations, as you've seen above. It makes sense to do it all in one go.
The other reason is our toolchain. We used to have a colorful mix of different technologies for compilers and linkers on different platforms.
And since this work requires changes to compilers and linkers, that would mean changing the build - and testing it - across 5 compilers and 4 linkers. But, thankfully, we've simplified our toolchain (Simplicity - another one of the 4S's!). To be able to do this, we worked with the LLVM community to make clang a great Windows compiler, and partnered with that same community to create new ELF (Linux), COFF (Windows), and Mach-O (macOS, iOS) linkers.
And suddenly, it's only a single toolchain to fix. (Almost. LTO for lld on macOS is being worked on.)
Sometimes, the best way to get more speed is not to change the code you wrote, but to change the way you build the software.
Posted by Rachel Blum, Engineering Director, Chrome Desktop
At Chrome, we’re always looking for ways to help users better understand and manage privacy on the web. Our most recent change provides more clarity on controlling site storage settings.
Starting today, with the rollout of this change to M97 Beta, we are re-configuring our Privacy and Security settings related to the data a site can store (e.g., cookies). Users can now delete all data stored by an individual site by navigating to Settings > Privacy and Security > Site Settings > View permissions and data stored across sites, where they'll land on chrome://settings/content/all. We are removing the more granular controls found by navigating to Settings > Privacy and Security > Cookies and other site data > See all cookies and site data at chrome://settings/siteData. This capability remains accessible in DevTools for developers, the intended audience for this level of granularity.
Before and after: chrome://settings/siteData now notes that the page is being removed and that controls for web-facing storage are available at chrome://settings/content/all, where users can delete web-facing storage.
Why the change?
We believe that removing these granular controls from Settings creates a clearer experience for users. When users delete individual cookies, they can accidentally change a site's implementation details and break their experience on that site in ways that are difficult to predict. Even more capable users run the risk of compromising some of their privacy protection by incorrectly assuming the purpose of a cookie.
We see this functionality being used primarily by developers, and we therefore remain committed to providing them with the tools they need in DevTools. Developers can continue to use DevTools to access more technical detail on a per-cookie or per-storage level as needed.
Granular cookie controls remain available in DevTools.
As always, we welcome your feedback as we continue to build a more helpful Chrome. Our next step is working to remove this functionality from Page Info to keep all granular cookie controls in DevTools. If you have any other questions or comments on Storage Controls, please share them with us here.
Unless otherwise noted, changes described below apply to the newest Chrome beta channel release for Android, Chrome OS, Linux, macOS, and Windows. Learn more about the features listed here through the provided links. Chrome 97 is beta as of November 18, 2021.
Preparing for a Three Digit Version Number
Next year, Chrome will release version 100. This will add a digit to the version number reported in Chrome's user agent string. To help site owners test for the new string, Chrome 96 introduces a runtime flag that causes Chrome to return '100' in its user agent string. This new flag, chrome://flags/#force-major-version-to-100, is available from Chrome 96 onward. For more information, see Force Chrome major version to 100 in the User-Agent string.
Ideally, we would load the smallest chunk of Java necessary for a process to run. We can get close to this by using Android App Bundles and splitting code into feature modules. Feature modules allow splitting code, resources, and assets into distinct APKs installed alongside the base APK, either on-demand or during app install.
Now, it seems like we have exactly what we want: a feature module could be created for the browser process code, which could be loaded when needed. However, this is not how Android loads feature modules. By default, all installed feature modules are loaded on startup. For an app with a base module and three feature modules “a”, “b”, and “c”, this gives us an Android Context with a ClassLoader that looks something like this:
Having a small minimum set of installed modules that are all immediately loaded at startup is beneficial in some situations. For example, if an app has a large feature that is needed only for a subset of users, the app could avoid installing it entirely for users who don't need it. However, for more commonly used features, having to download a feature at runtime can introduce user friction -- for example, additional latency or challenges if mobile data is unavailable. Ideally we'd be able to have all of our standard modules installed ahead of time, but loaded only when they're actually needed.
Isolated Splits to the Rescue
A few days of spelunking in the Android source code led us to the android:isolatedSplits attribute. If this is set to "true", each installed split APK will not be loaded during start-up, and instead must be loaded explicitly. This is exactly what we want: it allows our processes to use fewer resources! The ClassLoader illustrated above now looks like this:
In Chrome’s case, the small amount of code needed in the renderer and GPU processes can be kept in the base module, and the browser code and other expensive features can be split into feature modules to be loaded when needed. Using this method, we were able to reduce the .dex size loaded in child processes by 75% to ~2.5MB, making them start faster and use less memory.
This architecture also enabled optimizations for the browser process. We were able to improve startup time by preloading the majority of the browser process code on a background thread while the Application initializes, leading to a 7.6% faster load time. By the time an Activity or other component that needs the browser code is launched, the code is already loaded. By optimizing how features are allocated into feature modules, features can be loaded on demand, which defers the memory and loading cost until the feature is used.
Results
Since Chrome shipped with isolated splits in M89, we now have several months of data from the field, and we are pleased to share significant improvements in memory usage, startup time, page load speed, and stability for all Chrome on Android users running Android Oreo or later:
Median total memory usage improved by 5.2%
Median renderer process memory usage improved by 7.9%
Median GPU process memory usage improved by 7.6%
Median browser process memory usage improved by 1.2%
95th percentile startup time improved by 7.6%
95th percentile page load speed improved by 2.3%
Large improvements in both browser crash rate and renderer hang rate
Posted by Clark Duvall, Chrome Software Engineer
Data source for all statistics: Real-world data anonymously aggregated from Chrome clients.
The Privacy Sandbox continues to be a cornerstone of our ongoing efforts to collaboratively build privacy-preserving technologies for a healthy web. Our development timeline, which we'll update monthly, shares when developers and advertisers can expect these technologies to be ready for testing and scaled adoption.
This timeline reflects three developmental phases for Privacy Sandbox proposals:
1) Discussion
Dozens of ideas for privacy-preserving technologies have been proposed by Chrome and others for public discussion in forums such as the W3C and GitHub. For example, more than 100 organizations are helping to refine FLEDGE, a proposal for privacy-preserving remarketing.
2) Testing
Success at this stage depends on developers engaging in hands-on testing and then sharing their learnings publicly. Yahoo! JAPAN's analysis of the Attribution Reporting API and Criteo's machine learning competition for evaluating privacy concepts are examples we're grateful for.
This kind of feedback is critical to getting solutions right. For instance, we're currently improving FLoC — a proposal for anonymized interest groups — with insights from companies such as CafeMedia.
3) Scaled Adoption
Some Privacy Sandbox proposals are already live, such as User-Agent Client Hints, which are meant to replace the User-Agent (UA) string. We'll start to gradually reduce the granularity of information in the UA string in April 2022. We know implementing these changes takes time, so companies will have the option to use the UA string as is through March 2023 via an origin trial.
Stepping up In-Browser Experiences
With Project Fugu, we've been introducing APIs that elevate web apps so they can do anything native apps can. We've also been inspired by brands building more immersive web experiences with Progressive Web Apps (PWAs) and modern APIs.
Take Adobe, a brand we've been partnering with for more than three years. Photoshop, Creative Cloud Spaces, and Creative Cloud Canvas are now in Public Beta and available in browsers—with more flagship apps to follow. This means creatives can view work, share feedback, and make basic edits without having to download or launch native apps.
PWAs have given online video and web conferencing platforms an upgrade too. TikTok found a way to reach video lovers across all devices while YouTube Premium gives people the ability to watch videos offline on laptops and hybrid devices.
Meet drastically improved the audio and video quality in their PWA, and Kapwing focused on making it easy for users to edit videos collaboratively, anytime, anywhere. Zoom replaced their Chrome App with a PWA, and saw 16.9 million new users join web meetings, an increase of more than seven million users year over year.
Developers who want to learn more, or get started with Progressive Web Apps can check out our new Learn PWA course on web.dev. Three modules were launched today, with many more coming.
Continuously Improving Your Web Experience
Measuring site performance is a key part of keeping up with browsers as they evolve, which is where Core Web Vitals come in. Compared to a year ago, 20% more page visits in Chrome fully meet the recommended Core Web Vitals thresholds; such visits now account for 60% of all visits in Chrome.
Content management systems, website builders, e-commerce platforms, and JavaScript frameworks have helped push the Web Vitals initiative forward. As we shared in our Core Web Vitals Technology Report, sites built on many of these platforms are hitting Core Web Vitals out of the park.
While this kind of progress is exciting, optimizing for Core Web Vitals can still be challenging. That's why we've been improving our tools to help developers better monitor, measure, and understand site performance. Some of these changes include:
Updates in PageSpeed Insights which make the distinction between "field data" from user experiences and "lab data" from the Lighthouse report more clear.
Capabilities in Lighthouse to audit a complete user flow by loading additional pages and simulating scrolls and link clicks.
Support for user flows, such as a checkout flow, in DevTools with a new Recorder panel for exporting a recorded user journey to Puppeteer script.
We're also experimenting with two new performance metrics: overall input responsiveness, and scrolling and animation smoothness. We'd love to get your feedback, so take a spin through web.dev/responsiveness and web.dev/smoothness.
Expanding the Toolbox for Digital Interfaces
We've got developers and designers covered with tons of changes coming down the pipeline for UI styling and DevTools, including updates to responsive design. Developers can now customize user experiences in a component-driven architecture model, and we're calling this The New Responsive:
With the new container queries spec—available for testing behind a flag in Chrome Canary—developers can access a parent element's width to make styling decisions for its children, nest container queries, and create named queries for easier access and organization.
This is a huge shift for component-based development, so we've been providing new DevTools for debugging, styling, and visualizing CSS layouts. To make creating interfaces even easier, we also launched a collection of off-the-shelf UI patterns.
Developers who want to learn more can dive into free resources such as Learn Responsive Design on web.dev—a collaboration with Clearleft's Jeremy Keith—and six new modules in our Learn CSS course. There are also a few exciting CSS APIs in their first public working drafts, including:
Scroll-timeline for animating an element as people scroll (available via the experimental web platform features flag in Chrome Canary).
Size-adjust property for typography (available in Chromium and Firefox stable).
Accent-color for giving form controls a theme color (available in Chromium and Firefox stable).
One feature we're really excited to build on is Dark Mode, especially because we found indications that dark themes use 11% less battery power than light themes for OLED screens. Stay tuned for a machine-learning-aided, auto-dark algorithm feature in an upcoming version of Chrome.
Buckling Down for the Road Ahead
Part of what makes the web so special is that it's an open, decentralized ecosystem. We encourage everyone to make the most of this by getting involved in shaping the web's future in places such as:
To configure an app to start at login, navigate to chrome://apps and right-click on the app. From the context menu, select 'Start app when you sign in' and you are all set. The next time you log in to your device, the app will automatically launch on its own. To disable this feature for an app, right-click on it again in chrome://apps and deselect 'Start app when you sign in'.
Apps launched through Run on OS Login start only after the user has signed in to the device. 'Run on OS Login' is a browser-only feature and doesn't expose any launch-source information to app developers.
We're continuously improving the web platform to provide safe, low friction ways for users to get their day-to-day tasks done. Support for running installed web apps on OS login is a small but significant step to simplifying the startup routine for users that want apps like chat, email, or calendar clients to start as soon as they turn on their computer. As always, we're looking forward to your feedback. Your input will help us prioritize next steps!
Chrome has long-term investments in performance improvement across many projects and we are pleased to share improvements across speed, memory, and unexpected hangs in today’s The Fast and the Curious series post. One in six searches is now as fast as a blink of an eye, Chrome OS browsing now uses up to 20% less memory thanks to our PartitionAlloc investment, and we’ve resolved some thorny Chrome OS and Windows shutdown experiences.
Omnibox
You’ve probably noticed that potential queries are suggested to you as you type when you’re searching the web using Chrome’s omnibox (as long as the “Autocomplete searches and URLs” feature is turned on in Chrome settings.) This makes searching for information faster and easier, as you don’t have to type in the entire search query -- once you’ve entered enough text for the suggestion to be the one you want, you can quickly select it.
Searching in Chrome is now even faster, as search results are prefetched if a suggested query is very likely to be selected. This means that you see the search results more quickly, as they’ve been fetched from the web server before you even select the query. In fact, our experiments found that search results are now 4X more likely to be shown within 500 ms!
Currently, this only happens if Google Search is your default search engine. However, other search providers can trigger this feature by adding information to the query suggestions sent from their servers to Chrome, as described in this article.
Chrome OS PartitionAlloc
Chrome’s new memory allocator, PartitionAlloc, rolled out on Android and Windows in M89, bringing improved memory usage [up to 22% savings] and performance [up to 9% faster responsiveness]. Since then, we have also implemented PartitionAlloc on Linux in M92 and Chrome OS in M93. We are now pleased to announce that M93 field data from Chrome OS shows a total memory footprint reduction of 15% in addition to a 20% browser process memory reduction, improving the Chromebook browsing experience for both single and multi-tabs.
Resolving the #1 shutdown hang
Often software engineers add a cache to a system with the goal of improving performance. But a frequent corollary of caching is that the cache may introduce other problems (code complexity, stability, memory consumption, data consistency), and may even make performance worse. In this case, a local cache was added years ago to Chrome's history system with the goal of making startup faster. The premise at the time, which seemed to bear out in lab testing, was that caching Chrome's internal in-memory history index would be faster than reindexing the history at each startup.
Thanks to our continuing systematic investigation into real-world performance using crash data in conjunction with anonymized performance metrics, we uncovered that not only did this cache add code complexity and unnecessary memory usage, but it was also our top contributor to shutdown hangs in the browser. This is because on some OSes, background priority threads can be starved of I/O indefinitely while there is any other I/O happening elsewhere on the system. Moreover, the performance benefits to our users were minimal, based on analysis of field data. We've now removed the cache and resolved our top shutdown hang. This was a great illustration of the principle that caching is not always the answer!
Stay tuned for many more performance improvements to come!
Posted by Yana Yushkina, Product Manager, Chrome Browser
Data source for all statistics: Real-world data anonymously aggregated from Chrome clients.
Today we're announcing an important update to the previously communicated Chrome app support timeline. Based on feedback from our Enterprise and Education customers and partners, we have made the decision to extend Chrome app support for those users on Chrome OS until at least January 2025.
We continue to invest and have made significant progress in rich new capabilities on the Web platform with Progressive Web Apps (PWA), and we recommend that Chrome app developers migrate to PWAs as soon as possible. PWAs are built and enhanced with modern APIs to deliver enhanced capabilities, reliability, and installability while reaching anyone, anywhere, on any device with a single codebase. There is a growing ecosystem of powerful desktop web apps & PWAs, from advanced graphics products like Adobe Spark to engaging media apps like YouTube TV to productivity and collaboration apps like Zoom.
For additional support with Chrome app migration, please visit our Web apps on Chrome OS page. This page will be kept up to date as we progress together through this process.
We thank our community of developers who have provided feedback to help us shape this modified and simplified approach. We are inspired by a future beyond Chrome apps, where the ecosystem continues forward progress leveraging open Web standards across all modern browsers.