There are instructions for other platforms linked from the get the code page.
Are you a Google employee? See go/building-chrome instead.
python3 must point to a Python v3.9+ binary). Depot_tools bundles an appropriate version of Python in $depot_tools/python-bin, if you don't have an appropriate version already on your system. depot_tools currently uses Python 3.11. If something is broken with an older Python version, feel free to report it or send us fixes.
libc++ is currently the only supported STL. clang is the only officially-supported compiler, though external community members generally keep things building with gcc. For more details, see the supported toolchains doc.
Most development is done on Ubuntu (Chromium's build infrastructure currently runs 22.04, Jammy Jellyfish). There are some instructions for other distros below, but they are mostly unsupported; installation instructions for building in Docker can be found below.
Install depot_tools
Clone the depot_tools repository:
$ git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
Add depot_tools to the beginning of your PATH (you will probably want to put this in your ~/.bashrc or ~/.zshrc). Assuming you cloned depot_tools to /path/to/depot_tools:
$ export PATH="/path/to/depot_tools:$PATH"
When cloning depot_tools to your home directory, do not use ~ on PATH, otherwise gclient runhooks will fail to run. Rather, you should use either $HOME or the absolute path:
$ export PATH="${HOME}/depot_tools:$PATH"
Create a chromium directory for the checkout and change to it (you can call this whatever you like and put it wherever you like, as long as the full path has no spaces):
$ mkdir ~/chromium && cd ~/chromium
Run the fetch tool from depot_tools to check out the code and its dependencies.
$ fetch --nohooks chromium
If you are using NixOS, fetch won't work without a Nix shell. Clone the tools repo with git, then run nix-shell tools/nix/shell.nix.
If you don't want the full repo history, you can save a lot of time by adding the --no-history flag to fetch.
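For example, a checkout that skips both the hooks and the full history (both flags are described above) would look like:
$ fetch --nohooks --no-history chromium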
Expect the command to take 30 minutes on even a fast connection, and many hours on slower ones.
If you've already installed the build dependencies on the machine (from another checkout, for example), you can omit the --nohooks flag and fetch will automatically execute gclient runhooks at the end.
When fetch completes, it will have created a hidden .gclient file and a directory called src in the working directory. The remaining instructions assume you have switched to the src directory:
$ cd src
Once you have checked out the code, and assuming you're using Ubuntu, run build/install-build-deps.sh:
$ ./build/install-build-deps.sh
You may need to adjust the build dependencies for other distros. There are some notes at the end of this document, but we make no guarantees for their accuracy.
Once you've run install-build-deps at least once, you can now run the Chromium-specific hooks, which will download additional binaries and other things you might need:
$ gclient runhooks
Optional: You can also install API keys if you want your build to talk to some Google services, but this is not necessary for most development and testing purposes.
Chromium uses Siso as its main build tool, along with a tool called GN to generate .ninja files. You can create any number of build directories with different configurations. To create a build directory, run:
$ gn gen out/Default
You can replace Default with another name, but it should be a subdirectory of out. For more information on GN, run gn help on the command line or read the quick start guide.
This section contains some things you can change to speed up your builds, sorted so that the things that make the biggest difference are first.
Chromium's build can be sped up significantly by using a remote execution system compatible with REAPI. This allows you to benefit from remote caching and executing many build actions in parallel on a shared cluster of workers. Chromium's build uses a client developed by Google called Siso to remotely execute build actions.
To get started, you need access to an REAPI-compatible backend.
The following instructions assume that you received an invitation from Google to use Chromium's RBE service and were granted access to it. For contributors who have tryjob access, please ask a Googler to email accounts@chromium.org on your behalf to request access to the RBE backend paid for by Google. Note that remote execution for external contributors is a best-effort process. We do not guarantee when you will be invited.
For others who have no access to Google's RBE backends, you are welcome to use any of the other compatible backends, in which case you will have to adapt the following instructions regarding the authentication method, instance name, etc. to work with your backend.
If you would like to use siso with Google's RBE, you'll first need to run siso login and log in with your authorized account. If it is blocked in the OAuth2 flow, run gcloud auth login instead.
Next, you'll have to specify your rbe_instance in your .gclient configuration to use the correct one for Chromium contributors:
solutions = [
  {
    ...,
    "custom_vars": {
      # This is the correct instance name for using Chromium's RBE service.
      # You can only use it if you were granted access to it. If you use your
      # own REAPI-compatible backend, you will need to change this accordingly
      # to its requirements.
      "rbe_instance": "projects/rbe-chromium-untrusted/instances/default_instance",
    },
  },
]
And run gclient sync. This will regenerate the config files in build/config/siso/backend_config/backend.star to use the rbe_instance that you just added to your .gclient file.
If the rbe_instance is not owned by Google, you may need to create your own backend.star. See build/config/siso/backend_config/README.md.
Then, add the following GN args to your args.gn:
use_remoteexec = true
use_siso = true
If args.gn contains use_reclient=true, drop it or replace it with use_reclient=false.
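Putting these settings together, the remote-execution portion of args.gn would look roughly like this (a sketch; any other args you use stay alongside these):
use_remoteexec = true
use_siso = true
# Only needed if the file previously set it to true:
use_reclient = false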
That's it. Remember to always use autoninja for building Chromium as described below, instead of directly invoking siso or ninja.
Reach out to build@chromium.org if you have any questions about remote execution usage.
By default, GN produces a build with all of the debug assertions enabled (is_debug=true) and including full debug info (symbol_level=2). Setting symbol_level=1 will produce enough information for stack traces, but not line-by-line debugging. Setting symbol_level=0 will include no debug symbols at all. Either will speed up the build compared to full symbols.
Due to its extensive use of templates, the Blink code produces about half of our debug symbols. If you don't ever need to debug Blink, you can set the GN arg blink_symbol_level=0. Similarly, if you don't need to debug V8, you can improve build speeds by setting the GN arg v8_symbol_level=0.
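For example, an args.gn that keeps enough symbols for stack traces while skipping Blink and V8 debug info might look like this (a sketch; pick the levels that match what you actually need to debug):
is_debug = true
symbol_level = 1
blink_symbol_level = 0
v8_symbol_level = 0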
Icecc is a distributed compiler with a central scheduler to share build load. Currently, many external contributors use it, e.g. Intel, Opera, and Samsung. (This is not useful if you're using Siso.)
In order to use icecc, set the following GN args:
use_debug_fission=false
is_clang=false
See these links for more on the bundled_binutils limitation and the debug fission limitation.
Using the system linker may also be necessary when using glibc 2.21 or newer. See related bug.
You can use ccache to speed up local builds (again, this is not useful if you're using Siso).
Increase your ccache hit rate by setting CCACHE_BASEDIR to a parent directory that the working directories all have in common (e.g., /home/yourusername/development). Consider using CCACHE_SLOPPINESS=include_file_mtime (since if you are using multiple working directories, header times in svn-synced portions of your trees will be different - see the ccache troubleshooting section for additional information). If you use symbolic links from your home directory to get to the local physical disk directory where you keep those working development directories, consider putting
alias cd="cd -P"
in your .bashrc so that $PWD or cwd always refers to a physical, not logical, directory (and make sure CCACHE_BASEDIR also refers to a physical parent).
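A minimal sketch of the environment setup described above (placed in ~/.bashrc; the directory is illustrative):
export CCACHE_BASEDIR="${HOME}/development"
export CCACHE_SLOPPINESS=include_file_mtime
alias cd="cd -P"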
If you tune ccache correctly, a second working directory that uses a branch tracking trunk, is up to date with trunk, and was gclient-synced at about the same time should build Chrome in about 1/3 the time, and the cache misses as reported by ccache -s should barely increase.
This is especially useful if you use git-worktree and keep multiple local working directories going at once.
You can use tmpfs for the build output to reduce the number of disk writes required, i.e. mount tmpfs on the output directory where the build output goes.
As root:
mount -t tmpfs -o size=20G,nr_inodes=40k,mode=1777 tmpfs /path/to/out
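To make the mount persist across reboots, an /etc/fstab entry along these lines should work (a sketch; adjust the path and size to your setup):
tmpfs /path/to/out tmpfs size=20G,nr_inodes=40k,mode=1777 0 0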
Quick and dirty benchmark numbers on an HP Z600 (Intel Core i7, 16 cores hyperthreaded, 12 GB RAM)
The Chrome binary contains embedded symbols by default. You can reduce its size by using the Linux strip command to remove this debug information. You can also reduce binary size and turn on all optimizations by enabling official build mode, with the GN arg is_official_build = true.
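For example, a hedged illustration of stripping a copy of the binary (the -o flag writes the stripped output to a separate file so the original is left intact; the output name is illustrative):
$ strip -o out/Default/chrome.stripped out/Default/chrome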
Build Chromium (the “chrome” target) with Siso or Ninja using the command:
$ autoninja -C out/Default chrome
(autoninja is a wrapper that automatically provides optimal values for the arguments passed to siso or ninja.)
You can get a list of all of the other build targets from GN by running gn ls out/Default from the command line. To compile one, pass the GN label to Siso/Ninja with no preceding "//" (so, for //chrome/test:unit_tests use autoninja -C out/Default chrome/test:unit_tests).
Siso/Ninja supports a special ^ syntax to compile a single object file by specifying the source file. For example, autoninja -C out/Default ../../base/logging.cc^ compiles obj/base/base/logging.o.
In addition to foo.cc^, Siso also supports foo.h^ syntax to compile the corresponding foo.o if it exists.
Once it is built, you can simply run the browser:
$ out/Default/chrome
If you're using a remote machine that supports Chrome Remote Desktop, you can add this to your .bashrc / .bash_profile.
if [[ -z "${DISPLAY}" ]]; then
  # In reality, Chrome Remote Desktop starts with 20 and increases until it
  # finds an available ID [1]. So this isn't guaranteed to always work, but
  # should work in the vast majority of cases.
  #
  # [1] https://source.chromium.org/chromium/chromium/src/+/main:remoting/host/linux/linux_me2me_host.py;l=112;drc=464a632e21bcec76c743930d4db8556613e21fd8
  export DISPLAY=:20
fi
This means if you launch Chrome from an SSH session, the UI output will be available in Chrome Remote Desktop.
Tests are split into multiple test targets based on their type and where they exist in the directory structure. To see what target a given unit test or browser test file corresponds to, the following command can be used:
$ gn refs out/Default --testonly=true --type=executable --all chrome/browser/ui/browser_list_unittest.cc
//chrome/test:unit_tests
In the example above, the target is unit_tests. The unit_tests binary can be built by running the following command:
$ autoninja -C out/Default unit_tests
You can run the tests by running the unit_tests binary. You can also limit which tests are run using the --gtest_filter arg, e.g.:
$ out/Default/unit_tests --gtest_filter="BrowserListUnitTest.*"
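GoogleTest filters also support exclusion with a leading '-', which is handy for skipping a suite (an illustrative example):
$ out/Default/unit_tests --gtest_filter='-BrowserListUnitTest.*'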
You can find out more about GoogleTest at its GitHub page.
To update an existing checkout, you can run
$ git rebase-update
$ gclient sync
The first command updates the primary Chromium source repository and rebases any of your local branches on top of tip-of-tree (aka the Git branch origin/main). If you don't want to use this script, you can also just use git pull or other common Git commands to update the repo.
The second command syncs dependencies to the appropriate versions and re-runs hooks as needed.
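If a sync leaves stale dependency directories behind (for example, after switching branches), gclient can clean them up; a hedged example using its standard -D (delete unversioned trees) flag:
$ gclient sync -D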
If, during the final link stage:
LINK out/Debug/chrome
You get an error like:
collect2: ld terminated with signal 6 Aborted terminate called after throwing an instance of 'std::bad_alloc'
collect2: ld terminated with signal 11 [Segmentation fault], core dumped
or:
LLVM ERROR: out of memory
you are probably running out of memory when linking. You must use a 64-bit system to build. Try the following build settings (see GN build configuration for other settings):
is_debug = false
symbol_level = 0
is_component_build = true
You may also need to increase the vm.max_map_count value from its default (like 65530) to, for example, 262144. You can run the sudo sysctl -w vm.max_map_count=262144 command to set it in the current session from the shell, or add vm.max_map_count=262144 to /etc/sysctl.conf to save it permanently.
If you want to contribute to the effort toward a Chromium-based browser for Linux, please check out the Linux Development page for more information.
Instead of running install-build-deps.sh to install build dependencies, run:
$ sudo pacman -S --needed python perl gcc gcc-libs bison flex gperf pkgconfig \
    nss alsa-lib glib2 gtk3 nspr freetype2 cairo dbus xorg-server-xvfb \
    xorg-xdpyinfo
For the optional packages on Arch Linux:
php-cgi is provided with pacman.
wdiff is not in the main repository, but dwdiff is. You can get wdiff in AUR/yaourt.
First install the file and lsb-release commands for the script to run properly:
$ sudo apt-get install file lsb-release
Then invoke install-build-deps.sh with the --no-arm argument, because the ARM toolchain doesn't exist for this configuration:
$ sudo install-build-deps.sh --no-arm
Instead of running build/install-build-deps.sh, run:
su -c 'yum install git python bzip2 tar pkgconfig atk-devel alsa-lib-devel \
    bison binutils brlapi-devel bluez-libs-devel bzip2-devel cairo-devel \
    cups-devel dbus-devel dbus-glib-devel expat-devel fontconfig-devel \
    freetype-devel gcc-c++ glib2-devel glibc.i686 gperf glib2-devel \
    gtk3-devel java-1.*.0-openjdk-devel libatomic libcap-devel libffi-devel \
    libgcc.i686 libjpeg-devel libstdc++.i686 libX11-devel libXScrnSaver-devel \
    libXtst-devel libxkbcommon-x11-devel ncurses-compat-libs nspr-devel nss-devel \
    pam-devel pango-devel pciutils-devel pulseaudio-libs-devel zlib.i686 httpd \
    mod_ssl php php-cli python-psutil wdiff xorg-x11-server-Xvfb'
The fonts needed by Blink's web tests can be obtained by following these instructions. For the optional packages:
php-cgi is provided by the php-cli package.
sun-java6-fonts is covered by the instructions linked above.
On Gentoo, you can just run emerge www-client/chromium.
To get a shell with the dev environment:
$ nix-shell tools/nix/shell.nix
To run a command in the dev environment:
$ NIX_SHELL_RUN='autoninja -C out/Default chrome' nix-shell tools/nix/shell.nix
To set up clangd with remote indexing support, run the command below, then copy the path into your editor config:
$ NIX_SHELL_RUN='readlink /usr/bin/clangd' nix-shell tools/nix/shell.nix
Use the zypper command to install dependencies:
(openSUSE 11.1 and higher)
sudo zypper in subversion pkg-config python perl bison flex gperf \
    mozilla-nss-devel glib2-devel gtk-devel wdiff lighttpd gcc gcc-c++ \
    mozilla-nspr mozilla-nspr-devel php5-fastcgi alsa-devel libexpat-devel \
    libjpeg-devel libbz2-devel
For 11.0, use libnspr4-0d and libnspr4-dev instead of mozilla-nspr and mozilla-nspr-devel, and use php5-cgi instead of php5-fastcgi.
(openSUSE 11.0)
sudo zypper in subversion pkg-config python perl \
    bison flex gperf mozilla-nss-devel glib2-devel gtk-devel \
    libnspr4-0d libnspr4-dev wdiff lighttpd gcc gcc-c++ libexpat-devel \
    php5-cgi alsa-devel gtk3-devel jpeg-devel
The Ubuntu package sun-java6-fonts contains a subset of the Java fonts that are used. Since this package requires Java as a prerequisite anyway, we can do the same thing by just installing the equivalent openSUSE Sun Java package:
sudo zypper in java-1_6_0-sun
WebKit is currently hard-linked to the Microsoft fonts. To install these using zypper:
sudo zypper in fetchmsttfonts pullin-msttf-fonts
To make the fonts installed above work, as the paths are hardcoded for Ubuntu, create symlinks to the appropriate locations:
sudo mkdir -p /usr/share/fonts/truetype/msttcorefonts
sudo ln -s /usr/share/fonts/truetype/arial.ttf /usr/share/fonts/truetype/msttcorefonts/Arial.ttf
sudo ln -s /usr/share/fonts/truetype/arialbd.ttf /usr/share/fonts/truetype/msttcorefonts/Arial_Bold.ttf
sudo ln -s /usr/share/fonts/truetype/arialbi.ttf /usr/share/fonts/truetype/msttcorefonts/Arial_Bold_Italic.ttf
sudo ln -s /usr/share/fonts/truetype/ariali.ttf /usr/share/fonts/truetype/msttcorefonts/Arial_Italic.ttf
sudo ln -s /usr/share/fonts/truetype/comic.ttf /usr/share/fonts/truetype/msttcorefonts/Comic_Sans_MS.ttf
sudo ln -s /usr/share/fonts/truetype/comicbd.ttf /usr/share/fonts/truetype/msttcorefonts/Comic_Sans_MS_Bold.ttf
sudo ln -s /usr/share/fonts/truetype/cour.ttf /usr/share/fonts/truetype/msttcorefonts/Courier_New.ttf
sudo ln -s /usr/share/fonts/truetype/courbd.ttf /usr/share/fonts/truetype/msttcorefonts/Courier_New_Bold.ttf
sudo ln -s /usr/share/fonts/truetype/courbi.ttf /usr/share/fonts/truetype/msttcorefonts/Courier_New_Bold_Italic.ttf
sudo ln -s /usr/share/fonts/truetype/couri.ttf /usr/share/fonts/truetype/msttcorefonts/Courier_New_Italic.ttf
sudo ln -s /usr/share/fonts/truetype/impact.ttf /usr/share/fonts/truetype/msttcorefonts/Impact.ttf
sudo ln -s /usr/share/fonts/truetype/times.ttf /usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf
sudo ln -s /usr/share/fonts/truetype/timesbd.ttf /usr/share/fonts/truetype/msttcorefonts/Times_New_Roman_Bold.ttf
sudo ln -s /usr/share/fonts/truetype/timesbi.ttf /usr/share/fonts/truetype/msttcorefonts/Times_New_Roman_Bold_Italic.ttf
sudo ln -s /usr/share/fonts/truetype/timesi.ttf /usr/share/fonts/truetype/msttcorefonts/Times_New_Roman_Italic.ttf
sudo ln -s /usr/share/fonts/truetype/verdana.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana.ttf
sudo ln -s /usr/share/fonts/truetype/verdanab.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana_Bold.ttf
sudo ln -s /usr/share/fonts/truetype/verdanai.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana_Italic.ttf
sudo ln -s /usr/share/fonts/truetype/verdanaz.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana_Bold_Italic.ttf
And then for the Java fonts:
sudo mkdir -p /usr/share/fonts/truetype/ttf-lucida
sudo find /usr/lib*/jvm/java-1.6.*-sun-*/jre/lib -iname '*.ttf' -print \
    -exec ln -s {} /usr/share/fonts/truetype/ttf-lucida \;
While it is not a common setup, Chromium compilation should work from within a Docker container. If you choose to compile from within a container for whatever reason, you will need to make sure that the following tools are available:
curl
git
lsb_release
python3
sudo
file
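A quick way to check that these tools are present inside a container (an illustrative one-liner, not part of the official setup):
$ for t in curl git lsb_release python3 sudo file; do command -v "$t" >/dev/null || echo "missing: $t"; done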
There may be additional Docker-specific issues during compilation. See this bug for additional details on this.
Note: Clone depot_tools first.
Create a Dockerfile in the directory that will contain the Chromium checkout, e.g. /path/to/chromium/:
# Use an official Ubuntu base image
FROM ubuntu:22.04

# Set environment variables
ENV DEBIAN_FRONTEND=noninteractive

# Install mandatory tools (curl git python3) and optional tools (vim sudo)
RUN apt-get update && \
    apt-get install -y curl git lsb-release python3 file vim sudo && \
    rm -rf /var/lib/apt/lists/*

# Export depot_tools path
ENV PATH="/depot_tools:${PATH}"

# Configure git safe.directory
RUN git config --global --add safe.directory /depot_tools && \
    git config --global --add safe.directory /chromium/src

# Set the working directory to the existing Chromium source directory.
# This can be either "/chromium/src" or "/chromium".
WORKDIR /chromium/src

# Expose any necessary ports (if needed)
# EXPOSE 8080

# Create a dummy user and group to avoid permission issues
RUN groupadd -g 1001 chrom-d && \
    useradd -u 1000 -g 1001 -m chrom-d

# Switch to the normal user "chrom-d". Optional: you can use root, but it is
# not advised.
USER chrom-d

# Start Chromium builds as "chrom-d" (modify this command as needed)
# CMD ["autoninja -C out/Default chrome"]
CMD ["bash"]
# chrom-b is just a name; you can change it, but you must reflect the renaming
# in all commands below
$ docker build -t chrom-b .
$ docker run -it \                                    # Run docker interactively
    --name chrom-b \                                  # with name "chrom-b"
    -u root \                                         # with user root
    -v /path/on/machine/to/chromium:/chromium \       # with chromium folder mounted
    -v /path/on/machine/to/depot_tools:/depot_tools \ # with depot_tools mounted
    chrom-b                                           # run container with image name "chrom-b"
If you copy and paste the command above, remove the comments (everything after and including #) to avoid breaking the command.
Inside the container, install the build dependencies:
$ ./build/install-build-deps.sh
Before running hooks: ensure that all directories within third_party are added as safe directories in Git. This is required when running in the container because the ownership of the src/ directory (e.g., chrom-b) differs from the current user (e.g., root). To prevent Git warnings about "dubious ownership", run the following command after installing the dependencies:
# Loop through each directory in /chromium/src/third_party and add
# them as safe directories in Git
$ for dir in /chromium/src/third_party/*; do
    if [ -d "$dir" ]; then
      git config --global --add safe.directory "$dir"
    fi
  done
Exit the container.
Save the container image with the tag dpv1.0. Run this on the machine, not in the container:
# Get docker running/stopped containers, copy the "chrom-b" id
$ docker container ls -a
# Save/tag the running docker container named "chrom-b" as "dpv1.0"
# You can choose any tag name you want, but propagate the name accordingly.
# You will need to create new tags when working on different parts of
# chromium which require installing additional dependencies.
$ docker commit <ID from above step> chrom-b:dpv1.0
# Optional, just saves space by deleting unnecessary images
$ docker image rm chrom-b:latest && docker image prune \
    && docker container prune && docker builder prune
$ docker run --rm \                                   # remove the instance upon exit
    -it \                                             # run docker interactively
    --name chrom-b \                                  # with name "chrom-b"
    -u $(id -u):$(id -g) \                            # as a non-root user with the same UID & GID
    -v /path/on/machine/to/chromium:/chromium \       # with chromium folder mounted
    -v /path/on/machine/to/depot_tools:/depot_tools \ # with depot_tools mounted
    chrom-b:dpv1.0                                    # run container with image "chrom-b" and tag dpv1.0
If you copy and paste the command above, remove the comments (everything after and including #) to avoid breaking the command.