
.buckconfig

The root of your project must contain a configuration file named .buckconfig. Buck reads this file before executing its business logic, so any customizations specified in .buckconfig take effect. This file uses the INI file format with a few extensions, discussed below.

Although the INI format only recognizes strings as values, Buck allows fields to be parsed as a list of strings, separated by a separator character. For example, a field containing command line flags to be passed to a compiler may parse its value as a list of strings, separated by space, so that -foo -bar is parsed as a list of two strings, instead of a single string.

To ensure that any character can be encoded in a .buckconfig value, Buck allows values, or parts of values, to be quoted by surrounding them with double quotes. Inside double quotes, escape sequences can be used to encode characters that would otherwise be problematic. The following escape sequences are supported:

\\ - backslash
\" - double quote
\n - newline
\r - carriage return
\t - tab
\x## - Unicode character with code point ## (in hex)
\u#### - Unicode character with code point #### (in hex)
\U######## - Unicode character with code point ######## (in hex)
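For example, a value can mix escape sequences inside a quoted string (the section and field names here are illustrative, not settings Buck recognizes):

[example_section]
  example_field = "a \"quoted\" word\n" "\u0429"

Per the rules above, each escape sequence is interpreted only inside the double quotes.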

In addition, when a field is parsed as a list instead of a string, the separator character is only interpreted as a separator when it occurs outside double quotes. For example, if flags is a field interpreted as a list of strings separated by spaces, flags=-foo "-bar \u0429" will result in two strings: -foo and -bar Щ.

Finally, other fields' values can be interpolated by including $(config <section>.<field>) inside of a value. For example, if you want to use the go vendor path in a custom setting, you can use:

[custom_section]
custom_value = $(config go.vendor_path)

.buckconfig.local

The root of your project may also contain a second configuration file named .buckconfig.local. Its format is exactly the same as that of .buckconfig, but any definition in .buckconfig.local will override that of .buckconfig. In practice, .buckconfig will be a version-controlled file that contains settings that are applicable to all team members (such as standard includes for build files), whereas .buckconfig.local will be excluded from version control because it contains user-specific overrides (such as personal aliases).
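For instance, a .buckconfig.local might hold nothing but personal aliases that override or extend the team-wide ones (the target shown is illustrative):

[alias]
  myapp = //apps/myapp:app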

If a configuration option is not found in the project's .buckconfig, it will fall back to a .buckconfig file and .buckconfig.d directory in your home directory. Each of those has the same format as a .buckconfig file but will have any settings overridden by project-specific configurations. If you have build issues, make sure that there is nothing in your .buckconfig or .buckconfig.d that could be conflicting with the project you're trying to build.

Sections

The following sections are recognized by Buck:

[adb]
[alias]
[android]
[apple]
[build]
[buildfile]
[cache]
[client]
[color]
[credentials]
[cxx]
[d]
[doctor]
[download]
[go]
[groovy]
[halide]
[httpserver]
[intellij]
[java]
[log]
[lua]
[maven_repositories]
[ndk]
[parser]
[project]
[python]
[resources]
[resources_per_rule]
[rust]
[test]
[thrift]
[tools]
[ui]
[unknown_flavors_messages]
[worker]

[adb] #

This section configures adb behavior.

adb_restart_on_failure #

This specifies whether to restart adb on failure or not.

[adb]
  adb_restart_on_failure = true

multi_install_mode #

This specifies whether multi-install mode is enabled or disabled by default.

[adb]
  multi_install_mode = false

[alias] #

This section contains definitions of build target aliases.

[alias]
  app     = //apps/myapp:app
  apptest = //apps/myapp:test

These aliases can then be used from the command line:

$ buck build app
$ buck test apptest

You can also suffix aliases with flavors:

$ buck build app#src_jar
# This will expand the alias and effectively build the target returned by:
$ buck targets --resolve-alias app#src_jar
//apps/myapp:app#src_jar

[android] #

This section configures android-specific build behavior.

build_tools_version #

This specifies the version of the Android SDK Build-tools that all Android code in the project should be built against. By default, Buck will select the newest version found on the system.

[android]
  build_tools_version = 23.0.1

target #

This specifies the version of the Android SDK that all Android code in the project should be built against. Even if not specified, the version that Buck chose to use will be printed to the console during the build. A list of valid values on your system can be found by running android list targets --compact.

[android]
  target = Google Inc.:Google APIs:21

[apple] #

This section includes settings that are specific to Apple platform rules.

xcode_developer_dir #

By default, Buck will use the output of xcode-select --print-path to determine where Xcode's developer directory is. However, you can specify a directory in the config to override whatever value that would return.

[apple]
  xcode_developer_dir = path/to/developer/directory

xcode_developer_dir_for_tests #

Optionally override the Xcode developer directory for running tests, if you want them to be run with a different Xcode version than the version used for building. If absent, falls back to xcode_developer_dir and finally xcode-select --print-path.

[apple]
  xcode_developer_dir_for_tests = path/to/developer/directory/for_tests

target_sdk_version #

For each platform, you can specify the target SDK version to use. The format is {platform}_target_sdk_version.

[apple]
  iphonesimulator_target_sdk_version = 7.0
  iphoneos_target_sdk_version = 7.0
  macosx_target_sdk_version = 10.9

xctool_path #

If you want to run tests with Buck, you will need to get xctool and tell Buck where to find it. This setting lets you specify a path to a binary. You should use either this setting or apple.xctool_zip_target.

[apple]
  xctool_path = path/to/binary/of/xctool

xctool_zip_target #

If you want to run tests with Buck, you will need to get xctool and tell Buck where to find it. This setting lets you specify a build target. You should use either this setting or apple.xctool_path.

[apple]
  xctool_zip_target = //path/to/target/that/creates:xctool-zip

codesign #

To override the default path to codesign, set this to either a file path or a Buck target.

[apple]
  codesign = //path/to/target/that/creates:codesign

test_log #

When running Apple tests via xctool, Buck can set environment variables to tell the tests where to write debug logs and what log level to use. By default, Buck tells xctool to set two environment variables named FB_LOG_DIRECTORY and FB_LOG_LEVEL when running tests, which you can read from your test environment:

  FB_LOG_DIRECTORY=buck-out/gen/path/to/logs
  FB_LOG_LEVEL=debug
You can override the default names for these environment variables and the value for the debug log level via the following config settings:

  [apple]
    test_log_directory_environment_variable=MY_LOG_DIRECTORY
    test_log_level_environment_variable=MY_LOG_LEVEL
    test_log_level=verbose

xctool_default_destination_specifier #

This setting is passed directly to xctool, and then to xcodebuild as the -destination argument. See the man page for the proper syntax.

[apple]
  xctool_default_destination_specifier = platform=iOS Simulator

default_debug_info_format_for_binaries #

The default_debug_info_format_for_binaries setting controls the default debug info format used when building binary targets. If you don't specify it, the DWARF_AND_DSYM value is used. You can disable debug data by specifying the NONE value. You can produce an unstripped binary by specifying the DWARF value.

[apple]
  default_debug_info_format_for_binaries = NONE

default_debug_info_format_for_libraries #

The default_debug_info_format_for_libraries setting controls the default debug info format used when building dynamic library targets. If you don't specify it, the DWARF value is used. You can disable debug data by specifying the NONE value. You can produce a dSYM file for the library by specifying the DWARF_AND_DSYM value.

[apple]
  default_debug_info_format_for_libraries = DWARF

default_debug_info_format_for_tests #

The default_debug_info_format_for_tests setting controls the default debug info format used when building test targets. If you don't specify it, the DWARF value is used. You can disable debug data by specifying the NONE value. You can produce a dSYM file by specifying the DWARF_AND_DSYM value.

[apple]
  default_debug_info_format_for_tests = DWARF_AND_DSYM

device_helper_path #

If you want to have Buck be able to install to devices, you need to provide the path to the fbsimctl binary.

[apple]
  device_helper_path = third-party/fbsimctl/fbsimctl

provisioning_profile_read_command #

Specifies a command with any optional arguments that Buck will use to decode Apple's provisioning profiles for iOS builds. The full path of the provisioning profile will be appended after the command and any arguments specified here. If unspecified, Buck will use openssl smime -inform der -verify -noverify -in.

[apple]
  provisioning_profile_read_command = path/to/command --arg1 --arg2

provisioning_profile_search_path #

Specifies a path where Buck will look for provisioning profiles (files with extension .mobileprovision) that it can use to provision the application to be used on a device. You can specify either an absolute path or one relative to the project root. If unspecified, Buck will look in ~/Library/MobileDevice/Provisioning Profiles.

[apple]
  provisioning_profile_search_path = path/to/provisioning/profiles

code_sign_identities_command #

Specifies a command with any optional arguments that Buck will use to get the current key fingerprints available for code signing. This command should output a list of hashes and common names to standard output in the same format as security find-identity -v -p codesigning. If unspecified, Buck will use security find-identity -v -p codesigning.

[apple]
  code_sign_identities_command = path/to/command --arg1 --arg2

use_header_maps_in_xcode #

Xcode projects generated by Buck by default use header maps for header search paths. This speeds up builds for large projects over using regular directory header search paths, but breaks some Xcode features, like header file name autocompletion. If that is an issue, use the following option to disable the use of header maps.

[apple]
  use_header_maps_in_xcode = false

*_package_command #

Specify a custom command to run for apple_package() rules. The syntax of this field is similar to the cmd field of genrule, and supports some expansions:

SRCS
Expands to the absolute path of the bundle argument output of the apple_package() rule.
OUT
Expands to the output file for the apple_package() rule. The file specified by this variable must always be written by this command.
SDKROOT
Expands to the SDK root directory for the requested SDK. For example, /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS9.2.sdk/.
Note that since strings in the config can be quoted, literal quotes can only be written by quoting the string and using escaped quotes. If omitted, this will revert to the built-in behavior. When this option is specified, *_package_extension must also be specified.

[apple]
  iphoneos_package_command = "\"$PLATFORM_DIR/Developer/usr/bin/PackageApplication\" \"$SRCS\" \"$OUT\""
  iphoneos_package_extension = zip

*_package_extension #

Specify the output extension for custom apple_package rules configured with *_package_command. This config option must be specified when *_package_command is specified; otherwise both must be omitted.

*_toolchains_override #

Specify a comma-delimited custom list of toolchains to use when building with a particular SDK. This is the Buck equivalent of the TOOLCHAINS environment variable when building with Xcode. If omitted, this will revert to the built-in behavior.
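For example, to pin iOS builds to a single toolchain (the toolchain identifier shown is illustrative; use the identifiers you would otherwise pass via the TOOLCHAINS environment variable):

[apple]
  iphoneos_toolchains_override = com.apple.dt.toolchain.XcodeDefault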

[build] #

This section includes settings that control build engine behavior.

engine #

This has two possible values that change the behavior of how Buck operates when building a build target:

  • shallow (default): only the required transitive dependencies of a build target are materialized locally. Cache hits can result in missing transitive dependencies that are not needed for the final output.
  • deep: ensure that all transitive dependencies of a build target are materialized locally.

[build]
  engine = shallow

depfiles #

Configures the use of dependency files for rules that support them. This is an optimization that is useful when dependencies are over-specified and the rule can dynamically determine the subset of dependencies it actually needs. The possible values are:

  • enabled (default): Use dependency files to avoid unnecessary rebuilds.
  • cache: Use dependency files to avoid unnecessary rebuilds and to store/fetch artifacts to/from the cache.
  • disabled: Do not use dependency files for rebuild detection.

[build]
  depfiles = enabled

max_depfile_cache_entries #

Sets the maximum size of the depfile cache for each input source file. This is only used when setting build.depfiles to cache. An ideal setting for this should be big enough for the working set of all possible header states that a given unchanged source file uses.

[build]
  max_depfile_cache_entries = 256

type #

Sets the type of the build that buck has been built with. This allows buck to distinguish different kinds of builds. When you run ant locally, this is automatically set to LOCAL_ANT. When you build buck using buck locally, e.g. buck build buck, this is automatically set to LOCAL_PEX. If you deploy buck through a central deployment system, you may want to set the build type to RELEASE_PEX:

buck build buck --config build.type=RELEASE_PEX

Note: this setting does not affect how buck builds other rules. It only affects how buck builds buck itself.

[build]
  type = RELEASE_PEX

threads #

Sets the maximum number of threads to use for building. By default, Buck uses the number of available cores multiplied by 1.25.

[build]
  threads = 4

thread_core_ratio #

Sets the maximum number of threads to use for building as a ratio of the number of available cores (e.g. 0.75 on a 4 core machine would limit building to 3 threads, or a value of 1.25 on the same machine would attempt to use 5 threads).

[build]
  thread_core_ratio = 0.75

thread_core_ratio_max_threads #

The maximum number of threads to use when calculating the number of build threads from thread_core_ratio. (e.g. a value of 2 on a 4 core machine would ensure that, at most, 2 threads were used, and a value of 10 on a 40 core machine would ensure that, at most, 10 threads were used).

[build]
  thread_core_ratio_max_threads = 10

thread_core_ratio_min_threads #

The minimum number of threads to use when calculating the number of build threads from thread_core_ratio. (e.g. a value of 1 on a 4 core machine would ensure that, at least, 1 thread was used, and a value of 4 on a 40 core machine would ensure that, at least, 4 threads were used).

[build]
  thread_core_ratio_min_threads = 1

thread_core_ratio_reserved_cores #

Limit the maximum number of build threads to be the number of detected cores minus this value. (e.g. a value of 1 on a 4 core machine would ensure that, at most, 3 cores were used, and a value of 2 on a 40 core machine would ensure that, at most, 38 cores were used).

[build]
  thread_core_ratio_reserved_cores = 1

network_threads #

The number of threads to be used for network I/O. The default value is the number of cores of the machine.

[build]
  network_threads = 8

rule_key_caching #

Enables caching of rule key calculations between builds when using the Buck daemon.

[build]
  rule_key_caching = true

[buildfile] #

This section includes settings that control build file behavior.

includes #

This sets a list of paths to files that will be automatically included by every build file. This is equivalent to calling include_defs() in every build file.

[buildfile]
  includes = //core/DEFS

name #

The name of build files within a project. This defaults to BUCK.

[buildfile]
  name = TARGETS

[cache] #

This section configures build artifact caching, which can be disabled (default), on the filesystem, or in a distributed cache that can be shared among developers. Note that the cache.mode setting determines which other properties, if any, are relevant to the caching configuration; the irrelevant properties are ignored.

mode #

A comma-separated set of caching policies to use. The valid values are:

  • dir (default): Use a directory-based cache on the local filesystem.
  • http: Use an http-based cache.

[cache]
  mode = dir, http

dir #

The directory path relative to the project root that is used for directory-based caching (cache.mode must contain dir). This defaults to buck-out/cache.

[cache]
  dir = buck-cache

dir_max_size #

The maximum cache size for directory-based caching (cache.mode must contain dir). The default size is unlimited.

[cache]
  dir_max_size = 10GB

dir_mode #

Dictates if the cache is readonly, passthrough or readwrite (default) when using directory-based caching (cache.mode must contain dir).

[cache]
  dir_mode = readwrite

dir_cache_names #

A comma-separated list of names used to configure multiple dir caches. The caches will be used serially in the order in which their names are specified here. If an artifact is found further along in the list, an attempt to store it in the caches earlier in the list will be made. In the following example, if the artifact is found in the warm cache, it will not be stored in the local cache. Note: if [cache] dir or [cache] dir_mode are found, then Buck will fall back to single dir cache mode and [cache] dir_cache_names will be completely ignored.

[cache]
    mode = dir
    dir_cache_names = warm, local

[cache#warm]
    dir = ~/prefetched_cache
    dir_mode = readonly

[cache#local]
    dir = ~/buck_cache
    dir_mode = readwrite

http_url #

The URL to use to contact the cache when using http-based caching (cache.mode must contain http). Buck communicates with the server using a simple API.

[cache]
  http_url = http://localhost:8080

http_mode #

Dictates if the cache is readonly or readwrite (default) when using http-based caching (cache.mode must contain http).

[cache]
  http_mode = readwrite

http_read_headers #

A semicolon-separated set of HTTP headers to use when reading from the cache when using http-based caching (cache.mode must contain http). The default is no headers.

[cache]
  http_read_headers = User-Agent: buck

http_write_headers #

A semicolon-separated set of HTTP headers to use when writing to the cache when using http-based caching (cache.mode must contain http). The default is no headers.

[cache]
  http_write_headers = Authorization: XXXXXXX; User-Agent: buck

http_timeout_seconds #

Dictates the timeout per connection when using http-based caching (cache.mode must contain http). It will be the default value for http_connect_timeout_seconds, http_read_timeout_seconds, http_write_timeout_seconds if they're not set. The default is 3.

[cache]
  http_timeout_seconds = 3

http_connect_timeout_seconds #

Dictates the timeout on http connect when using http-based caching. If the value is not set, it will try to use the value set for http_timeout_seconds, then fall back to the default value of 3.

[cache]
  http_connect_timeout_seconds = 3

http_read_timeout_seconds #

Dictates the timeout on http reads when using http-based caching. If the value is not set, it will try to use the value set for http_timeout_seconds, then fall back to the default value of 3.

[cache]
  http_read_timeout_seconds = 3

http_write_timeout_seconds #

Dictates the timeout on http writes when using http-based caching. If the value is not set, it will try to use the value set for http_timeout_seconds, then fall back to the default value of 3.

[cache]
  http_write_timeout_seconds = 3

http_max_concurrent_writes #

The number of writer threads to use to upload to the http cache when using http-based caching (cache.mode must contain http). The default is 1. Note that when using multiple http caches (see below), the writer thread pool is shared between them all.

[cache]
  http_max_concurrent_writes = 1

http_writer_shutdown_timeout_seconds #

The length of time to wait after the build completes for any remaining http cache uploads to complete before forcefully shutting down the writer thread pool when using http-based caching (cache.mode must contain http). The default is 1800 (30 minutes).

[cache]
  http_writer_shutdown_timeout_seconds = 1800

http_error_message_format #

This setting allows for the customization of how http cache errors appear to the user. If the text {cache_name} is present, it will be replaced with the name of the cache. If the text {error_message} is present, it will be replaced with the error message.

[cache]
  http_error_message_format = The cache named {cache_name} encountered an error: {error_message}

http_max_store_size #

The max size in bytes that an artifact can be to get pushed to an http cache.

[cache]
  http_max_store_size = 5000000

serve_local_cache #

Make the directory-based cache (cache.mode must contain dir) available to other hosts on the network via Buck's HTTP server (enabled under [httpserver]).

[cache]
  serve_local_cache = false

served_local_cache_mode #

Dictates if the cache is readonly (default) or readwrite when cache.serve_local_cache is enabled.

[cache]
  served_local_cache_mode = readwrite

two_level_cache_enabled #

Have the Buck client perform 2-level stores and lookups on the artifacts. Every cache operation consists of 2 steps: content hash-based and RuleKey-based. This makes it easier to reuse locally cached artifacts across different buck versions at the expense of higher latencies in the case where artifacts are not present in the local cache.

[cache]
  two_level_cache_enabled = false

two_level_cache_minimum_size #

When performing a store, artifacts smaller than this size will be stored directly, without the content hash redirection.

[cache]
  two_level_cache_minimum_size = 1024

two_level_cache_maximum_size #

When performing a store, artifacts bigger than this size will be stored directly, without the content hash redirection.

[cache]
  two_level_cache_maximum_size = 1024

action_graph_cache_check_enabled #

Enables an integrity-checking mechanism in the action graph cache that compares a newly generated action graph with the one already in the cache in the case of a cache hit. If the graphs do not match, the build is stopped and the mismatching rules are printed and logged.

[cache]
  action_graph_cache_check_enabled = false

load_balancing_type #

Decides whether the distributed cache connects to a single URL or chooses among a pool of servers using client-side load balancing. Valid values are SINGLE_SERVER and CLIENT_SLB. NOTE: the 'slb_*' configs only apply when CLIENT_SLB is enabled.

[cache]
  load_balancing_type = CLIENT_SLB

slb_server_pool #

A comma-separated list of URLs of valid servers. The client-side load balancer will try to pick the best server to connect to for every single connection.

[cache]
  slb_server_pool = http://my.server.one/,http://my.server.two

slb_ping_endpoint #

The client-side load balancer will use this endpoint to check whether a server is in a healthy state. It will also be used to measure request latency.

[cache]
  slb_ping_endpoint = /ping.php

slb_health_check_internal_millis #

The interval in milliseconds between two consecutive client-side load balancer health checks to the slb_server_pool.

[cache]
  slb_health_check_internal_millis = 1000

slb_timeout_millis #

The connection timeout per health request made to each of the slb_server_pool servers. Any server that fails to respond within this period will be deemed unhealthy and not be used for cache requests.

[cache]
  slb_timeout_millis = 1000

slb_error_check_time_range_millis #

The error rate to each individual server taking part in the slb_server_pool is measured in the time range/window specified by this config. In other words, 'errors per second' is computed only for the last slb_error_check_time_range_millis.

[cache]
  slb_error_check_time_range_millis = 300000

slb_max_error_percentage #

The max error percentage allowed within the last slb_error_check_time_range_millis that is acceptable to keep a particular server marked as healthy and usable by the loadbalancer. Expects a float value in the interval [0, 1].

[cache]
  slb_max_error_percentage = 0.1

slb_latency_check_time_range_millis #

The latency to each individual server taking part in the slb_server_pool is measured in the time range/window specified by this config. In other words, 'server latency' is computed only for the last slb_latency_check_time_range_millis.

[cache]
  slb_latency_check_time_range_millis = 300000

slb_max_acceptable_latency_millis #

If the latency of a ping request to a server in slb_server_pool is higher than this, the server is deemed unhealthy and not used for cache operations.

[cache]
  slb_max_acceptable_latency_millis = 1000

[credentials] #

This section configures credentials to be used when fetching from authenticated Maven repositories via HTTPS.

For a repository repo appearing in [maven_repositories], Buck reads the values of repo_user and repo_pass in this section (if present), and passes them to the server using basic access authentication when fetching.

Note that authenticating in this way over plain HTTP connections is disallowed and will result in an error.

[maven_repositories]
  repo = https://example.com/repo
[credentials]
  repo_user = joeuser
  repo_pass = hunter2

[client] #

This section includes settings that provide information about the caller. Although these can be specified in .buckconfig, in practice, they are specified exclusively on the command line:

$ buck --config client.id=tool-making-this-buck-invocation build buck

id #

It is good practice for tools that call Buck to identify themselves via --config client.id=<toolname>. This makes it easier for developers to audit the source of Buck invocations that they did not make directly.

Note that the value of client.id is not factored into a build rule's cache key. It is purely for auditing purposes.

skip-action-graph-cache #

When Buck is run as a daemon, it caches the last Action Graph it used for a build so that if the next build identifies the same set of targets, the [possibly expensive] Action Graph construction step can be avoided. Because only the last Action Graph is cached, it may be costly to interleave a small build job among a series of incremental builds of an expensive rule:

$ buck build //big:expensive-rule            # Initial Action Graph.
$ buck build //big:expensive-rule            # Action Graph is reused.
$ buck build //library#compilation-database  # Evicts costly Action Graph.
$ buck build //big:expensive-rule            # Action Graph is rebuilt.

Although this scenario may sound contrived, it is very common when other tools may also be running buck build in the background. Work done by IDEs and linters frequently falls into this category. In this case, the best practice is to add --config client.skip-action-graph-cache=true for any sort of "one-off" build for which the cost of caching the Action Graph for the new build likely outweighs the benefit of evicting the Action Graph from the previous build. As this is commonly the case for tools, this flag is frequently used in concert with --config client.id:

$ buck build //big:expensive-rule            # Initial Action Graph.
$ buck build //big:expensive-rule            # Action Graph is reused.
$ buck build \                               # Cached Graph is unaffected.
    --config client.skip-action-graph-cache=true \
    --config client.id=nuclide \
    //library#compilation-database
$ buck build //big:expensive-rule            # Action Graph is reused.

[color] #

This section configures colored output of Buck.

ui #

Enables (default) or disables colorized output in the terminal.

[color]
  ui = true

[d] #

This section configures how code written in D is compiled.

base_compiler_flags #

Flags to pass to every invocation of the D compiler. This is a space-separated list. It defaults to an empty list.

[d]
  base_compiler_flags = -I/some/path -g -O3

compiler #

Path to the D compiler. If this parameter is not specified, Buck attempts to find the D compiler automatically.

[d]
  compiler = /opt/dmd/bin/dmd

library_path #

Directories to be searched for the D runtime libraries. This is a colon-separated list. If this parameter is not specified, Buck attempts to detect the location of the libraries automatically.

[d]
  library_path = /usr/local/lib:/opt/dmd/lib

linker_flags #

Flags to pass to the linker when linking D code into an executable. This is a space-separated list. If omitted, this value is constructed from d.library_path.

[d]
  linker_flags = "-L/path to phobos" -lphobos2

[download] #

This section configures downloading from the network during buck fetch.

proxy #

Buck will attempt to fetch files from the network; however, if you happen to be behind a firewall, this may not work correctly. You can supply a proxy when downloading from HTTP[S] servers with these three settings. Valid values for proxy_type are HTTP (default) and SOCKS. These values correspond to Java's Proxy.Type.

[download]
    proxy_host=proxy.example.com
    proxy_port=8080
    proxy_type=HTTP

maven_repo #

If a remote file's URL starts with mvn:, that file (usually a jar) is supposed to come from a maven repo. You can specify the repo to download from here, or by setting one or more repositories in [maven_repositories].

[download]
  maven_repo = https://repo1.maven.org/maven2

max_number_of_retries #

In case buck is unable to download a file, it will retry the specified number of times before giving up.

[download]
  max_number_of_retries = 3

[cxx] #

This section configures the paths to the C++ and C toolchains' binaries and the default flags to pass to all invocations of them.

cpp #

The path to the C preprocessor.

[cxx]
  cpp = /usr/bin/gcc

cc #

The path to the C compiler.

[cxx]
  cc = /usr/bin/gcc

ld #

The path to the C/C++ linker driver.

[cxx]
  ld = /usr/bin/g++

linker_platform #

The platform for the linker. Normally this is autodetected based on the system, but it is useful to set when cross compiling. Valid values are:

  • MACOS
  • LINUX
  • WINDOWS

[cxx]
  linker_platform = MACOS

cxxpp #

The path to the C++ preprocessor.

[cxx]
  cxxpp = /usr/bin/g++

cxx #

The path to the C++ compiler.

[cxx]
  cxx = /usr/bin/g++

aspp #

The path to the assembly preprocessor.

[cxx]
  aspp = /usr/bin/gcc

as #

The path to the assembler.

[cxx]
  as = /usr/bin/as

ar #

The path to the archiver.

[cxx]
  ar = /usr/bin/ar

archiver_platform #

The platform for the archiver. Normally this is autodetected based on the system, but it is useful to set when cross compiling. Valid values are:

  • MACOS
  • LINUX
  • WINDOWS

[cxx]
  archiver_platform = MACOS

cppflags #

The flags to pass to the C preprocessor.

[cxx]
  cppflags = -Wall

cflags #

The flags to pass to the C compiler and preprocessor.

[cxx]
  cflags = -Wall

ldflags #

The flags to pass to the linker.

[cxx]
  ldflags = --strip-all

cxxppflags #

The flags to pass to the C++ preprocessor.

[cxx]
  cxxppflags = -Wall

cxxflags #

The flags to pass to the C++ compiler and preprocessor.

[cxx]
  cxxflags = -Wall

asppflags #

The flags to pass to the assembly preprocessor.

[cxx]
  asppflags = -W

asflags #

The flags to pass to the assembler and assembly preprocessor.

[cxx]
  asflags = -W

arflags #

The flags to pass to the archiver.

[cxx]
  arflags = -X32_64

ranlibflags #

The flags to pass to the archive indexer.

[cxx]
  ranlibflags = --plugin someplugin

gtest_dep #

The build rule to compile the Google Test framework.

[cxx]
  gtest_dep = //third-party/gtest:gtest

If you had your Google Test code in third-party/gtest/, the build file in that directory would look something like this:

cxx_library(
  name = 'gtest',
  srcs = [
    'googletest/src/gtest-all.cc',
    'googlemock/src/gmock-all.cc',
    'googlemock/src/gmock_main.cc',
  ],
  header_namespace = '',
  exported_headers = subdir_glob([
    ('googletest/include', '**/*.h'),
    ('googlemock/include', '**/*.h'),
  ]),
  headers = subdir_glob([
    ('googletest', 'src/*.cc'),
    ('googletest', 'src/*.h'),
    ('googlemock', 'src/*.cc'),
    ('googlemock', 'src/*.h'),
  ]),
  platform_linker_flags = [
    ('android', []),
    ('', ['-lpthread']),
  ],
  visibility = [
    '//test/...',
  ],
)

untracked_headers #

How to handle header files that get included in a preprocessing step but which aren't explicitly owned by any dependencies. By default, Buck sandboxes headers into symlink trees, but file-relative inclusion and explicit preprocessor flags can still cause untracked headers to get pulled into the build, which can break caching.

  • ignore (default): Untracked headers are allowed in the build.
  • warn: Print a warning to the console when an untracked header is used.
  • error: Fail the build when an untracked header is used.

[cxx]
  untracked_headers = error

untracked_headers_whitelist #

A list of regexes which match headers to exempt from untracked header verification.

[cxx]
  untracked_headers_whitelist = /usr/include/.*, /usr/local/include/.*

pch_enabled #

Whether prefix headers used by a cxx_library or other such build rule's prefix_header parameter should be separately precompiled, and used in that rule's build.

If this is disabled, the prefix header is included as-is, without precompilation.

Default is true.

[cxx]
  pch_enabled = false

link_weight #

The number of jobs that each C/C++ link rule consumes when running. By default, this is 1, but it can be overridden to change how many link rules can execute in parallel for a given -j value. This is useful for builds with large, I/O-intensive static links where using a lower -j value is undesirable (since it reduces the parallelism for other build rule types).

[cxx]
  link_weight = 3

cache_links #

C/C++ link rules are cached by default. However, static C/C++ link jobs can take up a lot of cache space and also get relatively low hit rates, so this config option provides a way to disable caching of all C/C++ link rules in the build.

[cxx]
  cache_links = false

[doctor] #

This section defines variables associated with the buck doctor command.

protocol #

The communication protocol; it can be either simple or json.

[doctor]
  protocol = json

endpoint_url #

The address of the remote endpoint to which the request will be sent. This must be defined in order for the command to work.

[doctor]
  endpoint_url = http://localhost:4545

endpoint_timeout_ms #

The timeout in milliseconds before giving up contacting the analysis endpoint.

[doctor]
  endpoint_timeout_ms = 15

endpoint_extra_request_args #

This section of keys and values is added as parameters to the POST request sent to the doctor remote endpoint.

[doctor]
  endpoint_extra_request_args = ref=>1245,token=>42

report_upload_path #

The address of the remote endpoint to which the report will be uploaded.

[doctor]
  report_upload_path = http://localhost:4546

report_max_size #

The maximum size that the report endpoint can handle before giving up and storing it only locally.

[doctor]
  report_max_size = 512MB

report_timeout_ms #

The timeout in milliseconds before giving up contacting the report endpoint.

[doctor]
  report_timeout_ms = 15

report_max_upload_retries #

The number of times to try uploading to the report endpoint.

[doctor]
  report_max_upload_retries = 2

report_extra_info_command #

An extra command to run; its output is attached to the uploaded report.

[doctor]
  report_extra_info_command = /custom/script/to/run.sh

[go] #

This section defines the Go toolchain. By default Buck will try to discover the Go compiler and linker from the go tool found in your PATH.

root #

If you have a non-standard Go install, you will need to set the Go root. The root should contain pkg and bin directories.

[go]
  root = /opt/golang/libexec

prefix #

For interoperability with the go tool, you may specify a prefix for your default package names.

[go]
  prefix = github.com/facebook/buck

tool #

You can specify the path to the go tool. This in turn allows Buck to discover the compiler and linker by default. It defaults to ${go.root}/bin/go.

[go]
  tool = /usr/local/bin/go

compiler #

The full path to the Go compiler. This is normally automatically discovered.

[go]
  compiler = /usr/local/libexec/go/pkg/tool/darwin_amd64/compile

assembler #

The full path to the Go assembler. This is normally automatically discovered.

[go]
  assembler = /usr/local/libexec/go/pkg/tool/darwin_amd64/asm

packer #

The full path to the Go packer. This is normally automatically discovered.

[go]
  packer = /usr/local/libexec/go/pkg/tool/darwin_amd64/pack

linker #

The full path to the Go linker. This is normally automatically discovered.

[go]
  linker = /usr/local/libexec/go/pkg/tool/darwin_amd64/link

vendor_path #

A colon (:) separated list of directories to include in the import map for Go dependencies. Packages in these directories can be imported using just the path relative to the directory. This is similar to how 'vendor' directories work; e.g. you can use import golang.org/x/net for a package that lives in golang.org/x/net under one of these directories.

[go]
  vendor_path = third-party/go

[groovy] #

This section configures the Groovy toolchain.

groovy_home #

This defines the value of GROOVY_HOME that Buck should use. If it is not provided, Buck will use the system's GROOVY_HOME by default.

[groovy]
  groovy_home = /path/to/groovy_home

[halide] #

This section configures the Halide platform mappings and toolchain.

target #

This defines the C++ platform flavor to Halide target mapping. Each key should begin with the prefix target_, followed by the flavor name. The corresponding value should be the Halide target string to use when building for that flavor.

[halide]
  target_iphonesimulator-x86_64 = x86-64-osx
  target_iphoneos-arm64         = arm-64-ios

xcode_compile_script #

The optional path to a shell script which should be used for invoking the Halide AOT "compiler" when building projects that include Halide targets in Xcode.

[halide]
  xcode_compile_script = //path/to/script.sh

[intellij] #

This section configures the project generated for IntelliJ IDEA by the buck project command.

java_library_sdk_names #

SDK names which should be used in IntelliJ modules generated from java_library rules with non-default source option.

[intellij]
  java_library_sdk_names = 1.6 => Java SDK 1.6, 1.8 => Java SDK 1.8

jdk_name #

IntelliJ project SDK name.

[intellij]
  jdk_name = Java SDK 1.6

jdk_type #

IntelliJ project SDK type.

[intellij]
  jdk_type = Android SDK or JavaSDK

android_module_sdk_type #

Default Android SDK type for android modules.

[intellij]
  android_module_sdk_type = Android SDK

android_module_sdk_name #

Default Android SDK name for android modules.

[intellij]
  android_module_sdk_name = Android API 23 Platform

java_module_sdk_type #

SDK type for Java modules.

[intellij]
  java_module_sdk_type = JavaSDK

java_module_sdk_name #

SDK name for Java modules.

[intellij]
  java_module_sdk_name = 1.8

generated_sources_label_map #

Allows adding folders with generated source code to the IntelliJ project. These folders are added when a target has a label specified in this option. In the example below, if the target //app/target has the label generated_code_1, the folder buck-out/gen/app/lib/__lib_target1__ will be added to the IntelliJ project.

[intellij]
  generated_sources_label_map = generated_code_1 => __%name%_target1__, 
                       generated_code2 => __%name%_target2__

remove_unused_libraries #

Removes unused libraries from .idea/libraries.

[intellij]
  remove_unused_libraries = true

aggregate_android_resource_modules #

Forces buck project to aggregate modules with Android resources. This aggregation is performed only if the aggregation mode is not none.

Note: using this type of aggregation disables the Android layout editor provided by the Android plugin. The layout files can still be edited using the XML editor.

[intellij]
  aggregate_android_resource_modules = true

android_resource_module_aggregation_limit #

The maximum number of targets that can be aggregated into one module with Android resources. This limit is a workaround for a problem where the Android plugin cannot operate on modules with a large number of resource folders.

[intellij]
  android_resource_module_aggregation_limit = 1000

[java] #

This section configures the Java toolchain.

src_roots #

The paths to roots of Java code (where a root contains a tree of Java folders where the folder structure mirrors the package structure). This list of paths is comma-delimited. Paths that start with a slash are relative to the root of the project, and all other paths can match a folder anywhere in the tree. In the example below, we match all folders named src, and java and javatests at the root of the project.

[java]
  src_roots = src, /java/, /javatests/

extra_arguments #

A comma-delimited list of flags to pass to the Java compiler.

[java]
  extra_arguments = -g

source_level #

The default version of Java for source files. Also defines the project language level in IntelliJ.

[java]
  source_level = 7

target_level #

The default version of Java for generated code.

[java]
  target_level = 7

skip_checking_missing_deps #

Buck will attempt to analyze build failures and suggest dependencies that might not be declared in order to fix the failure. On large projects, this can be slow. This setting disables the check.

[java]
  skip_checking_missing_deps = false

jar_spool_mode #

Specifies how the compiler output to the .jar file should be spooled. The valid modes are:

  • intermediate_to_disk (default): writes the intermediate .class files from the compiler output to disk. They are then packed into a .jar.
  • direct_to_jar: compiler output will be directly written to a .jar file with the intermediate .class files held in memory. The compiler output will still be written to disk if there are any postprocessing commands specified during the build.

[java]
  jar_spool_mode = intermediate_to_disk

[dx] #

This section controls how Buck invokes the dx tool.

max_threads #

The number of threads that will run the dexing steps. Since the dexing steps can use a lot of memory, it might be useful to set this to a lower value to avoid out-of-memory errors.

[dx]
  max_threads = 8

max_heap_size #

This option specifies how much memory is available when running dx out of process.

[dx]
  max_heap_size = 2g

[httpserver] #

Option to enable an experimental web server that presents a UI to explore build data. Note that Buck must be run as a daemon in order for the web server to be available.

port #

This sets the port to use for the web server. There are three possible values:

  • n > 0: For any positive integer, Buck will attempt to make the server available on that port.
  • 0: Buck will find a free port for the server to use and print it out on the command line.
  • -1: Explicitly disables the server.

[httpserver]
  port = 8080

[log] #

This section controls how Buck will log information about builds for later inspection.

max_traces #

Sets the maximum number of Chrome Traces that Buck will create.

[log]
  max_traces = 25

compress_traces #

true if Buck should GZIP the traces, false otherwise.

[log]
  compress_traces = true

machine_readable_logger_enabled #

true if Buck should output to a machine readable log file under name buck-machine-log. Log entries are formatted one per line like < Event type >< space >< Json >.

[log]
  machine_readable_logger_enabled = true

[lua] #

This section defines settings relevant to lua_* rules.

lua #

The path to the Lua interpreter. By default, Buck will search for the binary lua in your PATH.

[lua]
  lua = /usr/bin/lua

cxx_library #

The build target of the Lua C library to use to link a standalone interpreter. By default, Buck will use -llua from the C/C++ linker's default library search path.

[lua]
  cxx_library = //third-party/lua:lua

starter_type #

The method for bootstrapping Lua binaries. By default, native is chosen if the binary contains native libraries and pure is chosen otherwise.

  • pure: The binary bootstrap process uses pure Lua code. This method cannot be used if the binary includes native code.
  • native: The binary bootstrap process links in the Lua C library (specified in lua.cxx_library) to form a standalone native interpreter.

[lua]
  starter_type = pure

native_starter_library #

A C/C++ library to use as a custom starter for Lua binaries which use the native bootstrap method. The library is expected to define the following function:

#ifdef __cplusplus
extern "C"
#endif
int run_starter(
    int argc,
    const char **argv,
    const char *main_module,
    const char *modules_dir,
    const char *extension_suffix);
Where the arguments are as follows:
  • argc: The number of command-line arguments.
  • argv: The array of command-line arguments.
  • main_module: The name of the binary's main module.
  • modules_dir: The path, relative to the binary, to the modules directory.
  • extension_suffix: The suffix used for native libraries (e.g. .so).

[lua]
  native_starter_library = //third-party/lua:starter

extension #

The extension to use for Lua binaries. Defaults to .lex.

[lua]
  extension = .lex

[maven_repositories] #

This section defines the set of maven repositories that Buck can use when attempting to resolve maven artifacts. It takes the form of key value pairs of a short name for the repo and the URL. The URL may either be an HTTP(S) URL, or point to a directory on your local disk.

[maven_repositories]
  central = https://repo1.maven.org/maven2
  m2 = ~/.m2/repository

Note that if you are using Buck to talk to Maven and you are using IPv6, you might need to add the following option to your .buckjavaargs file:

-Djava.net.preferIPv6Addresses=true

[ndk] #

This section defines properties to configure building native code against the Android NDK.

ndk_version #

The version of the NDK that Buck should use to build native code. Buck searches for this version in the subfolders of the directory given by the ANDROID_NDK_REPOSITORY environment variable.

[ndk]
  ndk_version = r10c

app_platform #

The Android platform libraries that the code is targeting. This is equivalent to APP_PLATFORM in the NDK build system. The default is android-9.

[ndk]
  app_platform = android-21

cpu_abis #

A comma separated list of the CPU ABIs that this repo supports. Buck will only build NDK code for these ABIs.

[ndk]
  cpu_abis = armv7, x86

compiler #

When compiling cxx_library rules, this specifies the compiler family to use from the NDK. The possible values are:

  • gcc (default): Use the GCC family of compilation tools.
  • clang: Use the Clang family of compilation tools.

[ndk]
  compiler = gcc

gcc_version #

When compiling cxx_library rules, this specifies the version of GCC to use. This will be used regardless of the value in ndk.compiler, as other compiler families still use tools from the GCC toolchain (such as ar). The default value is 4.8.

[ndk]
  gcc_version = 4.8

clang_version #

When compiling cxx_library rules, this specifies the version of Clang to use. The default value is 3.4.

[ndk]
  clang_version = 3.4

cxx_runtime #

When compiling cxx_library rules, this specifies the variant of the C/C++ runtime to use. Possible values are:

  • gabixx
  • gnustl (default)
  • libcxx
  • stlport
  • system

[ndk]
  cxx_runtime = gnustl

[parser] #

This section defines settings for the BUCK parser.

python_interpreter #

The path to the python interpreter to use for parsing. If not specified, the python.interpreter setting is used.

[parser]
  python_interpreter = /usr/bin/python

python_path #

The PYTHONPATH environment variable to set for the python interpreter used by the parser. By default, this is unset.

[parser]
  python_path = /path1:/path2

[project] #

This section defines project-level settings.

ide #

Buck attempts to figure out the correct IDE to use based on the type of rule (e.g. for apple_library it will generate an Xcode workspace), but for cross-platform libraries (like cxx_library) this is not possible. This setting lets you specify the default IDE that buck project generates. Possible values are:

  • intellij
  • xcode

[project]
  ide = xcode

default_android_manifest #

The default manifest file that should be used when a project is being generated for an Android rule (like android_library), but there is no AndroidManifest.xml file in the same directory as the build file the rule is defined in. IDEs for Android projects need a manifest file, and this setting provides a convenient fallback without having boilerplate manifest files all over your project.

[project]
  default_android_manifest = //shared/AndroidManifest.xml

glob_handler #

The glob() handler that Buck will use. The possible values are:

  • python (default): evaluates globs in the Python interpreter while parsing build files.
  • watchman: evaluates the globs with Watchman, which is generally much faster.

[project]
  glob_handler = python

allow_symlinks #

If set to forbid, Buck will disallow symbolic links to source and BUCK files. This allows Buck to enable a number of performance improvements. If set to allow, Buck will silently ignore symlinks.

The default value is warn.

[project]
  allow_symlinks = forbid

build_file_search_method #

If set to watchman, Buck will try to use Watchman (if available) instead of Java filesystem crawls to improve the speed when searching for BUCK files. (This is used for commands like buck project and buck build //path/to/....)

If set to filesystem_crawl, Buck will never try to use Watchman and will always use Java filesystem crawls (which are much slower than Watchman).

If unset, Buck will try to use Watchman if allow_symlinks is set to forbid.

[project]
  build_file_search_method = watchman

watchman_query_timeout_ms #

When communicating with Watchman, Buck will wait this long for a response. The default is 1000 ms.

[project]
  watchman_query_timeout_ms = 1000

initial_targets #

A space-separated list of build targets to run when buck project is executed. This is often a list of genrules whose outputs need to exist in order for an IDE to be able to build a project without the help of Buck.

[project]
  initial_targets = //java/com/facebook/schema:generate_thrift_jar

ignore #

A comma-separated list of subtrees within the project root which are ignored in the following contexts:

  • Buck daemon filesystem monitoring.
  • Filesystem traversal when searching for tests and BUCK files.
  • IntelliJ project indexing.

Buck automatically excludes its own output, e.g. buck-out, .buckd, and .idea, as well as the cache directory (see cache.mode), but it makes no assumptions about source control systems.

[project]
  ignore = .git

pre_process #

A script that should be executed before the project files are generated. This should only be used to do project-specific actions that are reasonably fast. The general advice is to keep it free of any logic that modifies generated iml files.

The environment of this script contains the following variables:

    BUCK_PROJECT_TARGETS - whitespace-separated list of input targets.

[project]
  pre_process = scripts/pre_process_buck_project.py

parallel_parsing #

When set to true, Buck will parse your build files in parallel.

[project]
  parallel_parsing = false

parsing_threads #

When project.parallel_parsing is enabled, this specifies the number of threads Buck uses to parse. By default, this is equal to the number of threads Buck uses to build; the effective value is the minimum of this setting and build.threads.

[project]
  parsing_threads = 2

build_file_import_whitelist #

A comma-separated list that configures which Python modules can be imported in build files.

[project]
  build_file_import_whitelist = math, Foo

[python] #

This section may define settings relevant to python_* rules.

Adding a section with the header [python#flavor] to the .buckconfig will add an alternate python section. This python section will be used instead of [python] when the compilation flavor is invoked by appending #flavor to a build target. This can be useful if you have Python 2 and Python 3 code in your project and need to differentiate accordingly (namely by changing the value of python.interpreter). On the command line, to build with [python#py3] rather than [python]:

    $ buck build app#py3
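An alternate section for a hypothetical py3 flavor might then look like the following sketch (the flavor name and interpreter path are illustrative, not defaults):

[python#py3]
  interpreter = /usr/bin/python3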
    

interpreter #

The path to the python interpreter to use. By default, Buck will search for this in your PATH.

[python]
  interpreter = /usr/bin/python

library #

The build rule, typically a prebuilt_cxx_library, wrapping the libpython.so that cxx_python_extension rules should build against.

[python]
  library = //third-party/python:python

path_to_pex_executor #

The path to the tool used to run executable Python packages. For self-executing packages, this should just be the shell.

[python]
  path_to_pex_executor = /bin/sh

pex_extension #

The extension to use for executable Python packages.

[python]
  pex_extension = .pex

package_style #

The packaging style to use for python_binary and python_test. Valid values are:

  • inplace: builds executables which are only able to run from within the repository. This style of packaging is significantly faster than standalone packages.
  • standalone (default): builds self-contained executable packages that can be run outside of the repository.

[python]
  package_style = standalone

native_link_strategy #

The strategy used for pulling in native dependencies:

  • merged: Native dependencies which are first-order dependencies of python_* rules are linked as full, separate, shared libraries. Transitive native dependencies are statically linked into a single monolithic shared library. This is preferred to reduce the native code size and shared library count.
  • separate (default): Transitive native dependencies are linked as full, separate, shared libraries. This is preferred for faster build-time speed.

[python]
  native_link_strategy = separate

[resources] #

The settings that control how Buck uses resources to schedule work. When the resource-aware scheduler is enabled, Buck will create more threads in an attempt to run resource-independent work in parallel. The number of build threads is still controlled by the num_threads option. Buck will also create a number of additional threads that are used for tasks that don't require CPU: network fetches, disk operations, etc. The total number of threads Buck operates is controlled by the managed_thread_count option; that is, it includes both build threads and additional threads.

resource_aware_scheduling_enabled #

When set to true, Buck will attempt to use the resource-aware scheduler.

[resources]
  resource_aware_scheduling_enabled = true

managed_thread_count #

Buck will use num_threads threads for CPU-intensive tasks (e.g. local building) and managed_thread_count - num_threads threads for other purposes. Thus, the managed_thread_count value must be greater than or equal to the num_threads value. If you don't specify this value, Buck will create a number of additional threads equal to the number of CPU cores on the machine. These additional threads are used for non-CPU work such as networking and disk I/O, but if one of the num_threads threads is free, Buck may use it for non-CPU work as well.

[resources]
  managed_thread_count = 40

default_cpu_amount #

The amount of CPU resource required by an arbitrary job that has no specific setting for its resource amounts. The default is 1: a single CPU is required for the job to complete.

[resources]
  default_cpu_amount = 1

default_memory_amount #

The amount of memory resource required by an arbitrary job that has no specific setting for its resource amounts. The default is 1: a single memory resource is required for the job to complete. A single memory resource is an abstract value; currently it equals 100 MB.

[resources]
  default_memory_amount = 1

default_disk_io_amount #

The amount of disk I/O resource required by an arbitrary job that has no specific setting for its resource amounts. A single disk resource is an abstract value: think of it as an SSD being able to handle 50 parallel disk jobs of weight 1, while an HDD can handle only 20. Thus, if a job needs to read or write a lot of data, it is better to assign a higher value for its disk I/O amount. This reduces the risk of several similar jobs running concurrently and performing huge disk I/O operations, slowing down both the build and overall system performance.

[resources]
  default_disk_io_amount = 1

default_network_io_amount #

A single network resource is an abstract value: think of it as Ethernet being able to handle 50 parallel network jobs of weight 1; slower network interfaces can handle fewer jobs. If a job needs to send or receive a lot of data, it is better to assign a higher value for its network I/O amount.

[resources]
  default_network_io_amount = 1

max_memory_resource #

The maximum memory resource available to Buck. By default, this is the size of the Java heap divided by 100 MB. A single memory resource is an abstract value; currently it equals 100 MB.

[resources]
  max_memory_resource = 30

max_disk_io_resource #

The maximum disk I/O resource available to Buck. By default, the value is 50. Think of it as an SSD being able to handle 50 parallel disk jobs of weight 1, while an HDD can handle only 20. Thus, if a job needs to read or write a lot of data, it should require a higher disk I/O resource.

[resources]
  max_disk_io_resource = 30

max_network_io_resource #

The maximum network I/O resource available to Buck. By default, the value is 30. Think of it as Ethernet being able to handle 50 parallel network jobs of weight 1; slower network interfaces can handle fewer jobs. If a job needs to send or receive a lot of data, it should require a higher network I/O resource.

[resources]
  max_network_io_resource = 30

[resources_per_rule] #

This section contains required resource amounts for various build rules. If the amounts for a build rule are not specified in this section, then amounts of 1 (CPU), 1 (memory), 0 (disk I/O), and 0 (network I/O) are used. Amounts are used during local building, so in most cases a build rule will require 0 for network I/O unless it fetches data over the network. A rule's name is constructed by converting the camel-case class name of the BuildRule in Buck's source code (e.g. MyBuildRule) into a lowercase underscore-separated name (e.g. my_build_rule).

[resources_per_rule]
  cxx_link = 1, 1, 5, 0
  android_binary = 8, 30, 30, 0
     

Buck will use the defined resource amounts during the build process in order to attempt to use all available resources.

[rust] #

The settings to control how Buck builds rust_* rules.

compiler #

The path that Buck should use to compile Rust files. By default, it checks your PATH.

[rust]
  compiler = /usr/local/bin/rustc

rustc_flags #

Default command-line flags passed to all invocations of the rust compiler.

[rust]
  rustc_flags = -g

rustc_binary_flags #

Default command-line flags passed to invocations of the rust compiler in rust_binary rules, in addition to options set in rustc_flags.

[rust]
  rustc_binary_flags = -C lto

rustc_library_flags #

Default command-line flags passed to invocations of the rust compiler in rust_library rules, in addition to options set in rustc_flags.

[rust]
  rustc_library_flags = --cfg=debug

unflavored_binaries #

Controls whether the output paths of rust_binary and rust_test rules include a flavor from the platform. Even unflavored, the path includes #binary.

[rust]
  unflavored_binaries = true

[test] #

The settings to control how Buck runs tests.

incl_no_location_classes #

This specifies whether JaCoCo code coverage is enabled for classes without a source location. The default is false. Set this to true to enable code coverage with Robolectric tests. Note that setting this to true will include dynamically created sources in code coverage, such as those created by mocking (e.g. JMockit) or persistence frameworks.

[test]
  incl_no_location_classes = true

timeout #

The number of milliseconds per test to allow before stopping the test and reporting a failure. The default is no timeout. Not all *_test rules utilize this value. A JUnit test can override this via the @Test annotation.

[test]
  timeout = 300000

rule_timeout #

The number of milliseconds per *_test rule to allow before stopping it and reporting a failure. The default is no timeout.

[test]
  rule_timeout = 1200000

external_runner #

This specifies an external test runner command to use instead of Buck's built-in test runner. The external test runner is invoked by Buck after it has built all the test rules. Buck passes the test runner the path to a file that contains a JSON-encoded list of test infos via the --buck-test-info [path] command line option.

Additionally, if buck test is invoked with -- [extra-runner-args], these will be passed to the external runner before --buck-test-info.

The JSON-encoded test file contains an array of infos. Those infos have the following fields:

  • target: The build target of the test rule.
  • type: A string describing the type of the test.
  • command: An array of command line arguments the test runner should invoke to run the test.
  • env: A map of environment variables that should be defined by the test runner when running the test.
  • labels: An array of labels that are defined on the test rule.
  • contacts: An array of contacts that are defined on the test rule. These are typically user names or email addresses.

[test]
  external_runner = command args...
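For illustration, the JSON file passed via --buck-test-info might contain entries shaped like the following sketch; every target name and value here is hypothetical:

[
  {
    "target": "//java/com/example:tests",
    "type": "junit",
    "command": ["java", "-jar", "example-runner.jar"],
    "env": {"EXAMPLE_VAR": "1"},
    "labels": ["unit"],
    "contacts": ["someone@example.com"]
  }
]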

[thrift] #

This section provides settings to locate required thrift components.

compiler #

The path or build target that builds the thrift compiler that Buck should use.

[thrift]
  compiler = /usr/local/bin/thrift

compiler2 #

The path or build target that builds the thrift2 compiler that Buck should use. If this is unset, it defaults to the value of thrift.compiler.

[thrift]
  compiler2 = /usr/local/bin/thrift2

[tools] #

This section tells Buck how to find certain tools, e.g. how Java compilation occurs, and how auxiliary tools are used, e.g. the ProGuard Java class file optimizer, which is used as part of the Android build process.

javac #

The javac option is a path to a program that acts like Java javac. When set, buck uses this program instead of the system Java compiler. When neither this nor tools.java_jar is set, Buck defaults to using the system compiler in-memory.
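For example, to point Buck at a specific compiler binary (the path is illustrative):

[tools]
  javac = /usr/local/bin/javac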

javac_jar #

When this option is set to a JAR file, Buck loads the referenced compiler in-memory. When neither this nor tools.javac is set, Buck defaults to using the system compiler in-memory.
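For example (the jar path is hypothetical):

[tools]
  javac_jar = third-party/java/compiler/javac.jar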

compiler_class_name #

When javac_jar is set, this specifies the class name of the compiler Buck should load from that jar. When this is not set but javac_jar is, Buck uses the default compiler class.
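For example (the class name is hypothetical):

[tools]
  compiler_class_name = com.example.compiler.ExampleCompiler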

proguard #

This option specifies the location of the JAR file to be used to invoke ProGuard. This overrides the default ProGuard JAR file that would have been picked up from the Android SDK. Here is an example setting:

[tools]
  proguard = proguard/proguard-fork.jar

proguard-max-heap-size #

This option specifies how much memory is used when running ProGuard. Defaults to 1024M. You may want to give ProGuard more memory to try to improve performance.

[tools]
  proguard-max-heap-size = 4096M

proguard-agentpath #

This option allows the specification of a Java profiling agent which is set with the -agentpath argument when the ProGuard jar file is executed. Typically this would be set in a .buckconfig.local configuration file for when you want to profile a build running on your local machine. Set this to the actual path of the installed agent on the machine where ProGuard will run.

[tools]
  proguard-agentpath = /Applications/YourKit_Java_Profiler_2015_build_15084.app/Contents/Resources/bin/mac/libyjpagent.jnilib

[ui] #

This section configures the appearance of Buck's command line interface.

always_sort_threads_by_time #

Specifies whether the lines with information about building and testing threads should always be sorted by the time spent running the rules they are currently executing. When set to false, threads are only sorted if there are more threads than available lines (see ui.thread_line_limit for an option to configure this limit). Only effective when the super console is used. The default value is false.

[ui]
  always_sort_threads_by_time = true

thread_line_limit #

Specifies how many lines will be used to show the status of running threads during building and testing by default. Only effective when the super console is used. The value has to be a positive number. The default value is 10.

[ui]
  thread_line_limit = 10

thread_line_limit_on_warning #

Specifies how many lines will be used to show the status of running threads during building and testing after a warning is reported. Only effective when the super console is used. The value has to be a positive number. Defaults to the value of ui.thread_line_limit.

[ui]
  thread_line_limit_on_warning = 10

thread_line_limit_on_error #

Specifies how many lines will be used to show the status of running threads during building and testing after an error is reported. Only effective when the super console is used. The value has to be a positive number. Defaults to the value of ui.thread_line_limit.

[ui]
  thread_line_limit_on_error = 10

unknown_flavors_messages #

Specifies messages to show when an error involves an unknown flavor; these messages can aid debugging or suggest a fix. Each key is a Java regular expression pattern, such as android-*, and its value is the message to show for flavors matching that pattern. Messages in .buckconfig take priority over Buck's defaults, and suggestions are shown for all matches.

[unknown_flavors_messages]
  android-* = Make sure you have Android SDK & NDK installed and set up.

[worker] #

This section configures the behavior of Buck's workers (worker_tools and similar).

persistent #

Specifies whether workers run in persistent mode by default (reusing the worker process across builds). The persistent option of worker_tool overrides this default. The default value is false. Be careful when setting this to true: persistent workers do not shut down after Buck commands complete and continue to consume system resources.

[worker]
  persistent = false