Diffstat (limited to 'doc'):

 doc/builders/fetchers.chapter.md | 40
 doc/builders/images/dockertools.section.md | 16
 doc/builders/images/ocitools.section.md | 6
 doc/builders/images/snaptools.section.md | 2
 doc/builders/packages/citrix.section.md | 10
 doc/builders/packages/eclipse.section.md | 10
 doc/builders/packages/elm.section.md | 2
 doc/builders/packages/emacs.section.md | 6
 doc/builders/packages/etc-files.section.md | 8
 doc/builders/packages/firefox.section.md | 27
 doc/builders/packages/fish.section.md | 2
 doc/builders/packages/fuse.section.md | 4
 doc/builders/packages/ibus.section.md | 6
 doc/builders/packages/linux.section.md | 14
 doc/builders/packages/locales.section.md | 4
 doc/builders/packages/nginx.section.md | 4
 doc/builders/packages/opengl.section.md | 2
 doc/builders/packages/shell-helpers.section.md | 2
 doc/builders/packages/steam.section.md | 16
 doc/builders/packages/urxvt.section.md | 8
 doc/builders/packages/weechat.section.md | 10
 doc/builders/special.xml | 1
 doc/builders/special/fhs-environments.section.md | 2
 doc/builders/special/invalidateFetcherByDrvHash.section.md | 31
 doc/builders/testers.chapter.md | 128
 doc/builders/trivial-builders.chapter.md | 6
 doc/contributing/coding-conventions.chapter.md | 22
 doc/contributing/reviewing-contributions.chapter.md | 6
 doc/contributing/submitting-changes.chapter.md | 4
 doc/doc-support/default.nix | 9
 doc/functions/library/attrsets.xml | 2
 doc/hooks/index.xml | 10
 doc/hooks/postgresql-test-hook.section.md | 59
 doc/languages-frameworks/chicken.section.md | 49
 doc/languages-frameworks/coq.section.md | 15
 doc/languages-frameworks/cuda.section.md | 34
 doc/languages-frameworks/gnome.section.md | 18
 doc/languages-frameworks/go.section.md | 6
 doc/languages-frameworks/index.xml | 2
 doc/languages-frameworks/javascript.section.md | 201
 doc/languages-frameworks/ocaml.section.md | 10
 doc/languages-frameworks/php.section.md | 4
 doc/languages-frameworks/python.section.md | 75
 doc/languages-frameworks/texlive.section.md | 2
 doc/languages-frameworks/vim.section.md | 10
 doc/manual.xml | 2
 doc/stdenv/cross-compilation.chapter.md | 45
 doc/stdenv/meta.chapter.md | 62
 doc/stdenv/multiple-output.chapter.md | 2
 doc/stdenv/stdenv.chapter.md | 60
 doc/using/configuration.chapter.md | 9
 doc/using/overlays.chapter.md | 20
 doc/using/overrides.chapter.md | 8
 53 files changed, 835 insertions(+), 278 deletions(-)
diff --git a/doc/builders/fetchers.chapter.md b/doc/builders/fetchers.chapter.md
index 28388ba685d8f..70380248f8c65 100644
--- a/doc/builders/fetchers.chapter.md
+++ b/doc/builders/fetchers.chapter.md
@@ -6,11 +6,11 @@ When using Nix, you will frequently need to download source code and other files
 
 Because fixed output derivations are _identified_ by their hash, a common mistake is to update a fetcher's URL or a version parameter, without updating the hash. **This will cause the old contents to be used.** So remember to always invalidate the hash argument.
 
-For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#sec-pkgs-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
+For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#tester-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
 
 ## `fetchurl` and `fetchzip` {#fetchurl}
 
-Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
+Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of `fetchurl` is provided below.
 
 ```nix
 { stdenv, fetchurl }:
@@ -24,9 +24,21 @@ stdenv.mkDerivation {
 }
 ```
 
-The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
+The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip`, on the other hand, will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
+
+## `fetchpatch` {#fetchpatch}
+
+`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time. It additionally supports the following optional arguments:
+
+- `relative`: Similar to using `git-diff`'s `--relative` flag, only keep changes inside the specified directory, making paths relative to it.
+- `stripLen`: Remove the first `stripLen` components of pathnames in the patch.
+- `extraPrefix`: Prefix pathnames by this string.
+- `excludes`: Exclude files matching these patterns (applies after the above arguments).
+- `includes`: Include only files matching these patterns (applies after the above arguments).
+- `revert`: Revert the patch.
+
+Note that because the checksum is computed after applying these effects, using or modifying these arguments will have no effect unless the `sha256` argument is changed as well.
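+
+As a rough sketch of how these arguments can be combined (the URL and hash below are placeholders, not a real patch):
+
+```nix
+fetchpatch {
+  # hypothetical upstream patch URL
+  url = "https://example.org/project/commit/abcdef1234.patch";
+  # drop the first path component of the file names in the patch
+  stripLen = 1;
+  # placeholder hash; replace it with the hash Nix reports for the normalized patch
+  sha256 = "0000000000000000000000000000000000000000000000000000";
+}
+```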
 
-`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
 
 Most other fetchers return a directory rather than a single file.
 
@@ -38,9 +50,9 @@ Used with Subversion. Expects `url` to a Subversion directory, `rev`, and `sha25
 
 Used with Git. Expects `url` to a Git repo, `rev`, and `sha256`. `rev` in this case can be full the git commit id (SHA1 hash) or a tag name like `refs/tags/v1.0`.
 
-Additionally the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposing to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout.
+Additionally, the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true`, which means that the `.git` directory of the clone won't be removed after checkout.
 
-If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from server, see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more infomation:
+If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from the server; see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more information:
 
 ```nix
 { stdenv, fetchgit }:
@@ -72,19 +84,23 @@ Used with Mercurial. Expects `url`, `rev`, and `sha256`.
 
 A number of fetcher functions wrap part of `fetchurl` and `fetchzip`. They are mainly convenience functions intended for commonly used destinations of source code in Nixpkgs. These wrapper fetchers are listed below.
 
+## `fetchFromGitea` {#fetchfromgitea}
+
+`fetchFromGitea` expects five arguments. `domain` is the Gitea server name. `owner` is a string corresponding to the Gitea user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every Gitea HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g., `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
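+
+A hedged sketch of a call (the owner, repo and hash are placeholders):
+
+```nix
+fetchFromGitea {
+  domain = "codeberg.org";  # any Gitea instance
+  owner = "example-owner";
+  repo = "example-repo";
+  rev = "v1.0";
+  sha256 = "0000000000000000000000000000000000000000000000000000";
+}
+```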
+
 ## `fetchFromGitHub` {#fetchfromgithub}
 
-`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available but `sha256` is currently preferred.
+`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g., `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
 
 `fetchFromGitHub` uses `fetchzip` to download the source archive generated by GitHub for the specified revision. If `leaveDotGit`, `deepClone` or `fetchSubmodules` are set to `true`, `fetchFromGitHub` will use `fetchgit` instead. Refer to its section for documentation of these options.
 
 ## `fetchFromGitLab` {#fetchfromgitlab}
 
-This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.
+This is used with GitLab repositories. The arguments expected are very similar to `fetchFromGitHub` above.
 
 ## `fetchFromGitiles` {#fetchfromgitiles}
 
-This is used with Gitiles repositories. The arguments expected are similar to fetchgit.
+This is used with Gitiles repositories. The arguments expected are similar to `fetchgit`.
 
 ## `fetchFromBitbucket` {#fetchfrombitbucket}
 
@@ -92,11 +108,11 @@ This is used with BitBucket repositories. The arguments expected are very simila
 
 ## `fetchFromSavannah` {#fetchfromsavannah}
 
-This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.
+This is used with Savannah repositories. The arguments expected are very similar to `fetchFromGitHub` above.
 
 ## `fetchFromRepoOrCz` {#fetchfromrepoorcz}
 
-This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.
+This is used with repo.or.cz repositories. The arguments expected are very similar to `fetchFromGitHub` above.
 
 ## `fetchFromSourcehut` {#fetchfromsourcehut}
 
@@ -107,4 +123,4 @@ or "hg"), `domain` and `fetchSubmodules`.
 
 If `fetchSubmodules` is `true`, `fetchFromSourcehut` uses `fetchgit`
 or `fetchhg` with `fetchSubmodules` or `fetchSubrepos` set to `true`,
-respectively. Otherwise the fetcher uses `fetchzip`.
+respectively. Otherwise, the fetcher uses `fetchzip`.
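+
+For illustration, a sketch of a call (all values are placeholders; note the `~` prefix on the owner):
+
+```nix
+fetchFromSourcehut {
+  owner = "~example-owner";
+  repo = "example-repo";
+  rev = "v1.0";
+  vc = "git";
+  sha256 = "0000000000000000000000000000000000000000000000000000";
+}
+```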
diff --git a/doc/builders/images/dockertools.section.md b/doc/builders/images/dockertools.section.md
index 7ff4b2aeb3690..458b0b36720fd 100644
--- a/doc/builders/images/dockertools.section.md
+++ b/doc/builders/images/dockertools.section.md
@@ -58,7 +58,7 @@ After the new layer has been created, its closure (to which `contents`, `config`
 
 At the end of the process, only one new single layer will be produced and added to the resulting image.
 
-The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.
+The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage), it would be `redis/latest`.
 
 It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
 
@@ -87,7 +87,7 @@ pkgs.dockerTools.buildImage {
 }
 ```
 
-and now the Docker CLI will display a reasonable date and sort the images as expected:
+Now the Docker CLI will display a reasonable date and sort the images as expected:
 
 ```ShellSession
 $ docker images
@@ -95,7 +95,7 @@ REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
 hello        latest   de2bf4786de6   About a minute ago   25.2MB
 ```
 
-however, the produced images will not be binary reproducible.
+However, the produced images will not be binary reproducible.
 
 ## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
 
@@ -119,13 +119,13 @@ Create a Docker image with many of the store paths being on their own layer to i
 
 `contents` _optional_
 
-: Top level paths in the container. Either a single derivation, or a list of derivations.
+: Top-level paths in the container. Either a single derivation, or a list of derivations.
 
     *Default:* `[]`
 
 `config` _optional_
 
-: Run-time configuration of the container. A full list of the options are available at in the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
+: Run-time configuration of the container. A full list of the options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
 
     *Default:* `{}`
 
@@ -195,9 +195,9 @@ pkgs.dockerTools.buildLayeredImage {
 
 Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images.
 
-Modern Docker installations support up to 128 layers, however older versions support as few as 42.
+Modern Docker installations support up to 128 layers, but older versions support as few as 42.
 
-If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However it will be impossible to extend the image further.
+If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further.
 
 The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the Image configuration.
 
@@ -213,7 +213,7 @@ The image produced by running the output script can be piped directly into `dock
 $(nix-build) | docker load
 ```
 
-Alternatively, the image be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:
+Alternatively, the image can be piped via `gzip` into `skopeo`, e.g., to copy it into a registry:
 
 ```ShellSession
 $(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
diff --git a/doc/builders/images/ocitools.section.md b/doc/builders/images/ocitools.section.md
index d3dee57ebac68..d3ab8776786bd 100644
--- a/doc/builders/images/ocitools.section.md
+++ b/doc/builders/images/ocitools.section.md
@@ -1,10 +1,10 @@
 # pkgs.ociTools {#sec-pkgs-ociTools}
 
-`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that it makes no assumptions about the container runner you choose to use to run the created container.
+`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that, it makes no assumptions about the container runner you choose to use to run the created container.
 
 ## buildContainer {#ssec-pkgs-ociTools-buildContainer}
 
-This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory.The nix store of the container will contain all referenced dependencies of the given command.
+This function creates a simple OCI container that runs a single command inside it. An OCI container consists of a `config.json` and a rootfs directory. The Nix store of the container will contain all referenced dependencies of the given command.
 
 The parameters of `buildContainer` with an example value are described below:
 
@@ -30,7 +30,7 @@ buildContainer {
 }
 ```
 
-- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container
+- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container.
 
 - `mounts` specifies additional mount points chosen by the user. By default only a minimal set of necessary filesystems are mounted into the container (e.g procfs, cgroupfs)
 
diff --git a/doc/builders/images/snaptools.section.md b/doc/builders/images/snaptools.section.md
index 5f710d2de7fe0..259fa1b061808 100644
--- a/doc/builders/images/snaptools.section.md
+++ b/doc/builders/images/snaptools.section.md
@@ -33,7 +33,7 @@ in snapTools.makeSnap {
 
 ## Build a Graphical Snap {#ssec-pkgs-snapTools-build-a-snap-firefox}
 
-Graphical programs require many more integrations with the host. This example uses Firefox as an example, because it is one of the most complicated programs we could package.
+Graphical programs require many more integrations with the host. Firefox is used as the example here because it is one of the most complicated programs we could package.
 
 ``` {#ex-snapTools-buildSnap-firefox .nix}
 let
diff --git a/doc/builders/packages/citrix.section.md b/doc/builders/packages/citrix.section.md
index b25ecb0bdefcb..4721f7e90f7ad 100644
--- a/doc/builders/packages/citrix.section.md
+++ b/doc/builders/packages/citrix.section.md
@@ -4,13 +4,13 @@ The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a
 
 ## Basic usage {#sec-citrix-base}
 
-The tarball archive needs to be downloaded manually as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store the package can be built and installed with Nix.
+The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) need to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.
 
-## Citrix Selfservice {#sec-citrix-selfservice}
+## Citrix Self-service {#sec-citrix-selfservice}
 
-The [selfservice](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with at least citrix_workspace_20_06_0 and later versions.
+The [self-service](https://support.citrix.com/article/CTX200337) is an application for managing Citrix desktops and applications. Please note that this feature only works with `citrix_workspace_20_06_0` and later versions.
 
-In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that you can configure the `selfservice` like this:
+In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that, you can configure the `selfservice` like this:
 
 ```ShellSession
 $ storebrowse -C ~/Downloads/receiverconfig.cr
@@ -19,7 +19,7 @@ $ selfservice
 
 ## Custom certificates {#sec-citrix-custom-certs}
 
-The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
+The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/); however, this directory is a store path in `nixpkgs`. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
 
 ```nix
 with import <nixpkgs> { config.allowUnfree = true; };
diff --git a/doc/builders/packages/eclipse.section.md b/doc/builders/packages/eclipse.section.md
index faabb1884501b..8cf7426833b84 100644
--- a/doc/builders/packages/eclipse.section.md
+++ b/doc/builders/packages/eclipse.section.md
@@ -8,9 +8,9 @@ Nixpkgs provides a number of packages that will install Eclipse in its various f
 $ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
 ```
 
-Once an Eclipse variant is installed it can be run using the `eclipse` command, as expected. From within Eclipse it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resemble a manually installed Eclipse.
+Once an Eclipse variant is installed, it can be run using the `eclipse` command, as expected. From within Eclipse, it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
 
-If you prefer to install plugins in a more declarative manner then Nixpkgs also offer a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is a one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add
+If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add:
 
 ```nix
 packageOverrides = pkgs: {
@@ -22,15 +22,15 @@ packageOverrides = pkgs: {
 }
 ```
 
-to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running
+to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running:
 
 ```ShellSession
 $ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
 ```
 
-If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
+If there is a need to install plugins that are not available in Nixpkgs, then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument, where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available, the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
 
-Expanding the previous example with two plugins using the above functions we have
+Expanding the previous example with two plugins using the above functions, we have:
 
 ```nix
 packageOverrides = pkgs: {
diff --git a/doc/builders/packages/elm.section.md b/doc/builders/packages/elm.section.md
index ae223c802da4e..063dd73d9de43 100644
--- a/doc/builders/packages/elm.section.md
+++ b/doc/builders/packages/elm.section.md
@@ -1,6 +1,6 @@
 # Elm {#sec-elm}
 
-To start a development environment do
+To start a development environment, run:
 
 ```ShellSession
 nix-shell -p elmPackages.elm elmPackages.elm-format
diff --git a/doc/builders/packages/emacs.section.md b/doc/builders/packages/emacs.section.md
index 577f1a23ce0e9..a202606966c03 100644
--- a/doc/builders/packages/emacs.section.md
+++ b/doc/builders/packages/emacs.section.md
@@ -20,7 +20,7 @@ The Emacs package comes with some extra helpers to make it easier to configure.
 }
 ```
 
-You can install it like any other packages via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provide a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
+You can install it like any other package via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
 
 ```nix
 {
@@ -101,9 +101,9 @@ You can install it like any other packages via `nix-env -iA myEmacs`. However, t
 }
 ```
 
-This provides a fairly full Emacs start file. It will load in addition to the user's presonal config. You can always disable it by passing `-q` to the Emacs command.
+This provides a fairly full Emacs start file. It will load in addition to the user's personal config. You can always disable it by passing `-q` to the Emacs command.
 
-Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control this priorities when some package is installed as a dependency. You can override it on per-package-basis, providing all the required dependencies manually - but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package you can use `overrideScope'`.
+Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control these priorities when some package is installed as a dependency. You can override it on a per-package basis, providing all the required dependencies manually, but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use `overrideScope'`.
 
 ```nix
 overrides = self: super: rec {
diff --git a/doc/builders/packages/etc-files.section.md b/doc/builders/packages/etc-files.section.md
index 2405a54634d89..94a769ed33555 100644
--- a/doc/builders/packages/etc-files.section.md
+++ b/doc/builders/packages/etc-files.section.md
@@ -1,10 +1,10 @@
 # /etc files {#etc}
 
-Certain calls in glibc require access to runtime files found in /etc such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
+Certain calls in glibc require access to runtime files found in `/etc` such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
 
-On non-NixOS distributions these files are typically provided by packages (i.e. [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code if they rely on these files being present.
+On non-NixOS distributions, these files are typically provided by packages (e.g., [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code that relies on these files being present.
 
-If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your _buildInputs_ then it will set the environment varaibles `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a _setup-hook_.
+If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your `buildInputs`, then it will set the environment variables `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a setup hook.
 
 
 ```bash
@@ -15,4 +15,4 @@ NIX_ETC_SERVICES=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/e
 NIX_ETC_PROTOCOLS=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/protocols
 ```
 
-Nixpkg's version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variable are *not set*, then it will attempt to find the files at the default location within _/etc_.
+The Nixpkgs version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variables are *not* set, then it will attempt to find the files at the default location within `/etc`.
diff --git a/doc/builders/packages/firefox.section.md b/doc/builders/packages/firefox.section.md
index d6426981da7d7..0dd786a599d0f 100644
--- a/doc/builders/packages/firefox.section.md
+++ b/doc/builders/packages/firefox.section.md
@@ -2,7 +2,7 @@
 
 ## Build wrapped Firefox with extensions and policies {#build-wrapped-firefox-with-extensions-and-policies}
 
-The `wrapFirefox` function allows to pass policies, preferences and extension that are available to Firefox. With the help of `fetchFirefoxAddon` this allows build a Firefox version that already comes with addons pre-installed:
+The `wrapFirefox` function allows you to pass policies, preferences and extensions that are available to Firefox. With the help of `fetchFirefoxAddon`, this allows you to build a Firefox version that already comes with add-ons pre-installed:
 
 ```nix
 {
@@ -26,10 +26,14 @@ The `wrapFirefox` function allows to pass policies, preferences and extension th
         Pocket = false;
         Snippets = false;
       };
-       UserMessaging = {
-         ExtensionRecommendations = false;
-         SkipOnboarding = true;
-       };
+      UserMessaging = {
+        ExtensionRecommendations = false;
+        SkipOnboarding = true;
+      };
+      SecurityDevices = {
+        # Use a proxy module rather than `nixpkgs.config.firefox.smartcardSupport = true`
+        "PKCS#11 Proxy Module" = "${pkgs.p11-kit}/lib/p11-kit-proxy.so";
+      };
     };
 
     extraPrefs = ''
@@ -40,13 +44,12 @@ The `wrapFirefox` function allows to pass policies, preferences and extension th
 }
 ```
 
-If `nixExtensions != null` then all manually installed addons will be uninstalled from your browser profile.
-To view available enterprise policies visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
-or type into the Firefox url bar: `about:policies#documentation`.
-Nix installed addons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded addons are checksumed and manual addons can't be installed. Also make sure that the `name` field of fetchFirefoxAddon is unique. If you remove an addon from the nixExtensions array, rebuild and start Firefox the removed addon will be completly removed with all of its settings.
+If `nixExtensions != null`, then all manually installed add-ons will be uninstalled from your browser profile.
+To view available enterprise policies, visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
+or type into the Firefox URL bar: `about:policies#documentation`.
+Nix-installed add-ons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded add-ons are checksummed and manual add-ons can't be installed. Also, make sure that the `name` field of `fetchFirefoxAddon` is unique. If you remove an add-on from the `nixExtensions` array, rebuild and start Firefox: the removed add-on will be completely removed with all of its settings.
 
 ## Troubleshooting {#sec-firefox-troubleshooting}
-If addons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox does not provide the ability anymore to disable signature verification for addons thus nix addons get disabled by the normal Firefox binary.
-
-If addons do not appear installed although they have been defined in your nix configuration file reset the local addon state of your Firefox profile by clicking `help -> restart with addons disabled -> restart -> refresh firefox`. This can happen if you switch from manual addon mode to nix addon mode and then back to manual mode and then again to nix addon mode.
+If add-ons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox no longer provides the ability to disable signature verification for add-ons; thus, Nix add-ons get disabled by the normal Firefox binary.
 
+If add-ons do not appear installed despite being defined in your Nix configuration file, reset the local add-on state of your Firefox profile by clicking `Help -> More Troubleshooting Information -> Refresh Firefox`. This can happen if you switch from manual add-on mode to Nix add-on mode and then back to manual mode and then again to Nix add-on mode.
diff --git a/doc/builders/packages/fish.section.md b/doc/builders/packages/fish.section.md
index 3086bd68348f6..85b57acd1090f 100644
--- a/doc/builders/packages/fish.section.md
+++ b/doc/builders/packages/fish.section.md
@@ -36,7 +36,7 @@ using `buildFishPlugin` and running unit tests with the `fishtape` test runner.
 ## Fish wrapper {#sec-fish-wrapper}
 
 The `wrapFish` package is a wrapper around Fish which can be used to create
-Fish shells initialised with some plugins as well as completions, configuration
+Fish shells initialized with some plugins as well as completions, configuration
 snippets and functions sourced from the given paths. This provides a convenient
 way to test Fish plugins and scripts without having to alter the environment.
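+
+For illustration, a sketch of such a wrapper (the argument names follow the description above and the plugin choice is a placeholder):
+
+```nix
+wrapFish {
+  # plugins built with buildFishPlugin, as described above
+  pluginPkgs = with fishPlugins; [ pure ];
+  # extra directories whose completions, functions and config snippets get sourced
+  completionDirs = [ ];
+  functionDirs = [ ];
+  confDirs = [ "/path/to/some/fish/init/dir" ];
+}
+```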
 
diff --git a/doc/builders/packages/fuse.section.md b/doc/builders/packages/fuse.section.md
index eb0023fcbc3ee..6deea6b5626ed 100644
--- a/doc/builders/packages/fuse.section.md
+++ b/doc/builders/packages/fuse.section.md
@@ -24,10 +24,10 @@ packages on macOS:
     checking for fuse.h... no
     configure: error: No fuse.h found.
 
-This happens on autoconf based projects that uses `AC_CHECK_HEADERS` or
+This happens on autoconf based projects that use `AC_CHECK_HEADERS` or
 `AC_CHECK_LIBS` to detect libfuse, and will occur even when the `fuse` package
 is included in `buildInputs`. It happens because libfuse headers throw an error
-on macOS if the `FUSE_USE_VERSION` macro is undefined. Many proejcts do define
+on macOS if the `FUSE_USE_VERSION` macro is undefined. Many projects do define
 `FUSE_USE_VERSION`, but only inside C source files. This results in the above
 error at configure time because the configure script would attempt to compile
 sample FUSE programs without defining `FUSE_USE_VERSION`.
diff --git a/doc/builders/packages/ibus.section.md b/doc/builders/packages/ibus.section.md
index 2ce85467bb861..1b09d3fbbab95 100644
--- a/doc/builders/packages/ibus.section.md
+++ b/doc/builders/packages/ibus.section.md
@@ -6,7 +6,7 @@ This package is an ibus-based completion method to speed up typing.
 
 IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/documentation.html).
 
-On NixOS you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
+On NixOS, you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
 
 ```nix
 { pkgs, ... }: {
@@ -19,7 +19,7 @@ On NixOS you need to explicitly enable `ibus` with given engines before customiz
 
 ## Using custom hunspell dictionaries {#sec-ibus-typing-booster-customize-hunspell}
 
-The IBus engine is based on `hunspell` to support completion in many languages. By default the dictionaries `de-de`, `en-us`, `fr-moderne` `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
+The IBus engine is based on `hunspell` to support completion in many languages. By default, the dictionaries `de-de`, `en-us`, `fr-moderne`, `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
 
 ```nix
 ibus-engines.typing-booster.override { langs = [ "de-at" "en-gb" ]; }
@@ -31,7 +31,7 @@ _Note: each language passed to `langs` must be an attribute name in `pkgs.hunspe
 
 The `ibus-engines.typing-booster` package contains a program named `emoji-picker`. To display all emojis correctly, a special font such as `noto-fonts-emoji` is needed:
 
-On NixOS it can be installed using the following expression:
+On NixOS, it can be installed using the following expression:
 
 ```nix
 { pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; }
diff --git a/doc/builders/packages/linux.section.md b/doc/builders/packages/linux.section.md
index f669c720710c8..b64da85791a0d 100644
--- a/doc/builders/packages/linux.section.md
+++ b/doc/builders/packages/linux.section.md
@@ -4,7 +4,7 @@ The Nix expressions to build the Linux kernel are in [`pkgs/os-specific/linux/ke
 
 The function that builds the kernel has an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernel’s `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`).
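+
+For illustration, a hypothetical entry in that list might look like this (the name and patch file are placeholders):
+
+```nix
+kernelPatches = [
+  {
+    # shows up in the kernel's meta.description
+    name = "example-fix";
+    # a local (possibly compressed) patch file
+    patch = ./example-fix.patch;
+    # extraConfig (optional) would go here as a string of extra configuration options
+  }
+];
+```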
 
-The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package:
+The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e., has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package:
 
 ```nix
 modulesTree = [kernel]
@@ -14,19 +14,19 @@ modulesTree = [kernel]
 
 How to add a new (major) version of the Linux kernel to Nixpkgs:
 
-1.  Copy the old Nix expression (e.g. `linux-2.6.21.nix`) to the new one (e.g. `linux-2.6.22.nix`) and update it.
+1.  Copy the old Nix expression (e.g., `linux-2.6.21.nix`) to the new one (e.g., `linux-2.6.22.nix`) and update it.
 
 2.  Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`).
 
 3.  Now we’re going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following:
 
-    1.  Make an copy from the old config (e.g. `config-2.6.21-i686-smp`) to the new one (e.g. `config-2.6.22-i686-smp`).
+    1.  Make a copy from the old config (e.g., `config-2.6.21-i686-smp`) to the new one (e.g., `config-2.6.22-i686-smp`).
 
-    2.  Copy the config file for this platform (e.g. `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
+    2.  Copy the config file for this platform (e.g., `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
 
-    3.  Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e. don’t enable some feature on `i686` and disable it on `x86_64`).
+    3.  Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e., don’t enable some feature on `i686` and disable it on `x86_64`).
 
-    4.  If needed you can also run `make menuconfig`:
+    4.  If needed, you can also run `make menuconfig`:
 
         ```ShellSession
         $ nix-env -f "<nixpkgs>" -iA ncurses
@@ -34,7 +34,7 @@ How to add a new (major) version of the Linux kernel to Nixpkgs:
         $ make menuconfig ARCH=arch
         ```
 
-    5.  Copy `.config` over the new config file (e.g. `config-2.6.22-i686-smp`).
+    5.  Copy `.config` over the new config file (e.g., `config-2.6.22-i686-smp`).
 
 4.  Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
 
diff --git a/doc/builders/packages/locales.section.md b/doc/builders/packages/locales.section.md
index e5a0370048183..3a983f13a396e 100644
--- a/doc/builders/packages/locales.section.md
+++ b/doc/builders/packages/locales.section.md
@@ -1,5 +1,5 @@
 # Locales {#locales}
 
-To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats Nixpkgs patches `glibc` to rely on `LOCALE_ARCHIVE` environment variable.
+To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats, Nixpkgs patches `glibc` to rely on the `LOCALE_ARCHIVE` environment variable.
 
-On non-NixOS distributions this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the `LOCALE_ARCHIVE` variable pointing to `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding parameters `allLocales` and `locales` of the package.
+On non-NixOS distributions, this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is to export the `LOCALE_ARCHIVE` variable, pointing it to `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding the `allLocales` and `locales` parameters of the package.
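+
+One way to do this from Nix is a small `shell.nix` along these lines (a sketch; the custom locale set mirrors the override described above):
+
+```nix
+with import <nixpkgs> { };
+
+let
+  # build a smaller locale archive instead of the full glibcLocales set
+  myLocales = glibcLocales.override {
+    allLocales = false;
+    locales = [ "en_US.UTF-8/UTF-8" ];
+  };
+in
+mkShell {
+  # exported as an environment variable for programs started from this shell
+  LOCALE_ARCHIVE = "${myLocales}/lib/locale/locale-archive";
+}
+```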
diff --git a/doc/builders/packages/nginx.section.md b/doc/builders/packages/nginx.section.md
index 154c21f9b3696..0704b534e5f72 100644
--- a/doc/builders/packages/nginx.section.md
+++ b/doc/builders/packages/nginx.section.md
@@ -4,8 +4,8 @@
 
 ## ETags on static files served from the Nix store {#sec-nginx-etag}
 
-HTTP has a couple different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store, because all file timestamps are set to 0 (for reasons related to build reproducibility).
+HTTP has a couple of different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store because all file timestamps are set to 0 (for reasons related to build reproducibility).
 
-Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g. a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
+Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g., a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
 
 As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of `/nix/store`, the hash in the store path is used as the `ETag` header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to do modify any configuration to get this behavior.
diff --git a/doc/builders/packages/opengl.section.md b/doc/builders/packages/opengl.section.md
index ee7f3af98cfc4..f4d282267a079 100644
--- a/doc/builders/packages/opengl.section.md
+++ b/doc/builders/packages/opengl.section.md
@@ -12,4 +12,4 @@ The NixOS desktop or other non-headless configurations are the primary target fo
 
 If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa.drivers` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs.
 
-For proprietary video drivers you might have luck with also adding the corresponding video driver package.
+For proprietary video drivers, you might have luck with also adding the corresponding video driver package.
diff --git a/doc/builders/packages/shell-helpers.section.md b/doc/builders/packages/shell-helpers.section.md
index 57b8619c50078..e7c2b0abebfca 100644
--- a/doc/builders/packages/shell-helpers.section.md
+++ b/doc/builders/packages/shell-helpers.section.md
@@ -4,7 +4,7 @@ Some packages provide the shell integration to be more useful. But unlike other
 
 - `fzf` : `fzf-share`
 
-E.g. `fzf` can then used in the `.bashrc` like this:
+For example, `fzf` can then be used in the `.bashrc` like this:
 
 ```bash
 source "$(fzf-share)/completion.bash"
diff --git a/doc/builders/packages/steam.section.md b/doc/builders/packages/steam.section.md
index 3ce33c9b60ef2..25728aa52aef0 100644
--- a/doc/builders/packages/steam.section.md
+++ b/doc/builders/packages/steam.section.md
@@ -2,20 +2,20 @@
 
 ## Steam in Nix {#sec-steam-nix}
 
-Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is the ultimate responsible for launching the steam binary, which is also in \$HOME.
+Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is ultimately responsible for launching the Steam binary, which is also in `$HOME`.
 
 Nix problems and constraints:
 
-- We don't have `/bin/bash` and many scripts point there. Similarly for `/usr/bin/python`.
+- We don't have `/bin/bash` and many scripts point there. Same thing for `/usr/bin/python`.
 - We don't have the dynamic loader in `/lib`.
-- The `steam.sh` script in \$HOME can not be patched, as it is checked and rewritten by steam.
+- The `steam.sh` script in `$HOME` cannot be patched, as it is checked and rewritten by steam.
 - The steam binary cannot be patched, it's also checked.
 
 The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented [here](http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non FHS environment.
 
 ## How to play {#sec-steam-play}
 
-Use `programs.steam.enable = true;` if you want to add steam to systemPackages and also enable a few workarrounds aswell as Steam controller support or other Steam supported controllers such as the DualShock 4 or Nintendo Switch Pr.
+Use `programs.steam.enable = true;` if you want to add Steam to `systemPackages` and also enable a few workarounds as well as Steam controller support or other Steam-supported controllers such as the DualShock 4 or Nintendo Switch Pro Controller.
 
 ## Troubleshooting {#sec-steam-troub}
 
@@ -32,7 +32,7 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a
 - **Using the FOSS Radeon or nouveau (nvidia) drivers**
 
   - The `newStdcpp` parameter was removed since NixOS 17.09 and should not be needed anymore.
-  - Steam ships statically linked with a version of libcrypto that conflics with the one dynamically loaded by radeonsi_dri.so. If you get the error
+  - Steam ships statically linked with a version of `libcrypto` that conflicts with the one dynamically loaded by `radeonsi_dri.so`. If you get the error:
 
     ```
     steam.sh: line 713: 7842 Segmentation fault (core dumped)
@@ -42,13 +42,13 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a
 
 - **Java**
 
-  1. There is no java in steam chrootenv by default. If you get a message like
+  1. There is no java in steam chrootenv by default. If you get a message like:
 
     ```
     /home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
     ```
 
-    you need to add
+    you need to add:
 
     ```nix
     steam.override { withJava = true; };
@@ -56,7 +56,7 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a
 
 ## steam-run {#sec-steam-run}
 
-The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To use it, install the `steam-run` package and run the game with
+The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To use it, install the `steam-run` package and run the game with:
 
 ```
 steam-run ./foo
diff --git a/doc/builders/packages/urxvt.section.md b/doc/builders/packages/urxvt.section.md
index 2d1196d92278e..507feaa6fd861 100644
--- a/doc/builders/packages/urxvt.section.md
+++ b/doc/builders/packages/urxvt.section.md
@@ -4,7 +4,7 @@ Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator.
 
 ## Configuring urxvt {#sec-urxvt-conf}
 
-In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as
+In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as:
 
 ```nix
 rxvt-unicode.override {
@@ -58,14 +58,14 @@ rxvt-unicode.override {
 
 ## Packaging urxvt plugins {#sec-urxvt-pkg}
 
-Urxvt plugins resides in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
+Urxvt plugins reside in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin, create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
 
 A plugin can be any kind of derivation, the only requirement is that it should always install perl scripts in `$out/lib/urxvt/perl`. Look for existing plugins for examples.
 
-If the plugin is itself a perl package that needs to be imported from other plugins or scripts, add the following passthrough:
+If the plugin is itself a Perl package that needs to be imported from other plugins or scripts, add the following passthrough:
 
 ```nix
 passthru.perlPackages = [ "self" ];
 ```
 
-This will make the urxvt wrapper pick up the dependency and set up the perl path accordingly.
+This will make the urxvt wrapper pick up the dependency and set up the Perl path accordingly.
diff --git a/doc/builders/packages/weechat.section.md b/doc/builders/packages/weechat.section.md
index e4e956b908edf..767cc604ab459 100644
--- a/doc/builders/packages/weechat.section.md
+++ b/doc/builders/packages/weechat.section.md
@@ -1,6 +1,6 @@
-# Weechat {#sec-weechat}
+# WeeChat {#sec-weechat}
 
-Weechat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration such as
+WeeChat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration, such as:
 
 ```nix
 weechat.override {configure = {availablePlugins, ...}: {
@@ -13,7 +13,7 @@ If the `configure` function returns an attrset without the `plugins` attribute,
 
 The plugins currently available are `python`, `perl`, `ruby`, `guile`, `tcl` and `lua`.
 
-The python and perl plugins allows the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
+The Python and Perl plugins allow the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
 
 ```nix
 weechat.override { configure = {availablePlugins, ...}: {
@@ -49,7 +49,7 @@ weechat.override {
 
 Further values can be added to the list of commands when running `weechat --run-command "your-commands"`.
 
-Additionally it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
+Additionally, it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
 
 ```nix
 weechat.override {
@@ -64,7 +64,7 @@ weechat.override {
 }
 ```
 
-In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute which contains a list of all scripts inside the store path. Furthermore all scripts have to live in `$out/share`. An exemplary derivation looks like this:
+In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute, which contains a list of all scripts inside the store path. Furthermore, all scripts have to live in `$out/share`. An exemplary derivation looks like this:
 
 ```nix
 { stdenv, fetchurl }:
diff --git a/doc/builders/special.xml b/doc/builders/special.xml
index 2f84599cdd4f3..8902ce5c81329 100644
--- a/doc/builders/special.xml
+++ b/doc/builders/special.xml
@@ -7,5 +7,4 @@
  </para>
  <xi:include href="special/fhs-environments.section.xml" />
  <xi:include href="special/mkshell.section.xml" />
- <xi:include href="special/invalidateFetcherByDrvHash.section.xml" />
 </chapter>
diff --git a/doc/builders/special/fhs-environments.section.md b/doc/builders/special/fhs-environments.section.md
index 43dc99b7c18fc..cacad261e28ff 100644
--- a/doc/builders/special/fhs-environments.section.md
+++ b/doc/builders/special/fhs-environments.section.md
@@ -45,3 +45,5 @@ One can create a simple environment using a `shell.nix` like that:
 ```
 
 Running `nix-shell` would then drop you into a shell with these libraries and binaries available. You can use this to run closed-source applications which expect FHS structure without hassles: simply change `runScript` to the application path, e.g. `./bin/start.sh` -- relative paths are supported.
+
+Additionally, the FHS builder links all relocated gsettings-schemas (the glib setup-hook moves them to `share/gsettings-schemas/${name}/glib-2.0/schemas`) to their standard FHS location. This means you don't need to wrap binaries with `wrapGAppsHook`.
diff --git a/doc/builders/special/invalidateFetcherByDrvHash.section.md b/doc/builders/special/invalidateFetcherByDrvHash.section.md
deleted file mode 100644
index 7c2f03a64b7b3..0000000000000
--- a/doc/builders/special/invalidateFetcherByDrvHash.section.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-## `invalidateFetcherByDrvHash` {#sec-pkgs-invalidateFetcherByDrvHash}
-
-Use the derivation hash to invalidate the output via name, for testing.
-
-Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
-
-Normally, fixed output derivations can and should be cached by their output
-hash only, but for testing we want to re-fetch everytime the fetcher changes.
-
-Changes to the fetcher become apparent in the drvPath, which is a hash of
-how to fetch, rather than a fixed store path.
-By inserting this hash into the name, we can make sure to re-run the fetcher
-every time the fetcher changes.
-
-This relies on the assumption that Nix isn't clever enough to reuse its
-database of local store contents to optimize fetching.
-
-You might notice that the "salted" name derives from the normal invocation,
-not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
-function twice: once to get a derivation hash, and again to produce the final
-fixed output derivation.
-
-Example:
-
-    tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
-      name = "nix-source";
-      url = "https://github.com/NixOS/nix";
-      rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
-      sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
-    };
diff --git a/doc/builders/testers.chapter.md b/doc/builders/testers.chapter.md
new file mode 100644
index 0000000000000..c6fb71de01807
--- /dev/null
+++ b/doc/builders/testers.chapter.md
@@ -0,0 +1,128 @@
+# Testers {#chap-testers}
+
+This chapter describes several testing builders which are available in the `testers` namespace.
+
+## `testVersion` {#tester-testVersion}
+
+Checks that the command output contains the specified version.
+
+Although simplistic, this test assures that the main program
+can run. While there's no substitute for a real test case,
+it does catch dynamic linking errors and such. It also provides
+some protection against accidentally building the wrong version,
+for example when using an 'old' hash in a fixed-output derivation.
+
+Examples:
+
+```nix
+passthru.tests.version = testVersion { package = hello; };
+
+passthru.tests.version = testVersion {
+  package = seaweedfs;
+  command = "weed version";
+};
+
+passthru.tests.version = testVersion {
+  package = key;
+  command = "KeY --help";
+  # Wrong '2.5' version in the code. Drop on next version.
+  version = "2.5";
+};
+```
+
+## `testEqualDerivation` {#tester-testEqualDerivation}
+
+Checks that two packages produce the exact same build instructions.
+
+This can be used to make sure that a certain difference of configuration,
+such as the presence of an overlay, does not cause a cache miss.
+
+When the derivations are equal, the return value is an empty file.
+Otherwise, the build log explains the difference via `nix-diff`.
+
+Example:
+
+```nix
+testEqualDerivation
+  "The hello package must stay the same when enabling checks."
+  hello
+  (hello.overrideAttrs(o: { doCheck = true; }))
+```
+
+## `invalidateFetcherByDrvHash` {#tester-invalidateFetcherByDrvHash}
+
+Use the derivation hash to invalidate the output via name, for testing.
+
+Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
+
+Normally, fixed output derivations can and should be cached by their output
+hash only, but for testing we want to re-fetch every time the fetcher changes.
+
+Changes to the fetcher become apparent in the `drvPath`, which is a hash of
+how to fetch, rather than a fixed store path.
+By inserting this hash into the name, we can make sure to re-run the fetcher
+every time the fetcher changes.
+
+This relies on the assumption that Nix isn't clever enough to reuse its
+database of local store contents to optimize fetching.
+
+You might notice that the "salted" name derives from the normal invocation,
+not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
+function twice: once to get a derivation hash, and again to produce the final
+fixed output derivation.
+
+Example:
+
+```nix
+tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
+  name = "nix-source";
+  url = "https://github.com/NixOS/nix";
+  rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
+  sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
+};
+```
+
+## `nixosTest` {#tester-nixosTest}
+
+Run a NixOS VM network test using this evaluation of Nixpkgs.
+
+NOTE: This function is primarily for external use. NixOS itself uses `make-test-python.nix` directly. Packages defined in Nixpkgs [reuse NixOS tests via `nixosTests`, plural](#ssec-nixos-tests-linking).
+
+It is mostly equivalent to the function `import ./make-test-python.nix` from the
+[NixOS manual](https://nixos.org/nixos/manual/index.html#sec-nixos-tests),
+except that the current application of Nixpkgs (`pkgs`) will be used, instead of
+letting NixOS invoke Nixpkgs anew.
+
+If a test machine needs to set NixOS options under `nixpkgs`, it must set only the
+`nixpkgs.pkgs` option.
+
+### Parameter
+
+A [NixOS VM test network](https://nixos.org/nixos/manual/index.html#sec-nixos-tests), or path to it. Example:
+
+```nix
+{
+  name = "my-test";
+  nodes = {
+    machine1 = { lib, pkgs, nodes, ... }: {
+      environment.systemPackages = [ pkgs.hello ];
+      services.foo.enable = true;
+    };
+    # machine2 = ...;
+  };
+  testScript = ''
+    start_all()
+    machine1.wait_for_unit("foo.service")
+    machine1.succeed("hello | foo-send")
+  '';
+}
+```
+
+### Result
+
+A derivation that runs the VM test.
+
+Notable attributes:
+
+ * `nodes`: the evaluated NixOS configurations. Useful for debugging and exploring the configuration.
+
+ * `driverInteractive`: a script that launches an interactive Python session in the context of the `testScript`.
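+
+Putting the pieces above together, an external project could invoke the tester as follows. This is only a sketch: obtaining `pkgs` via `import <nixpkgs> { }` and the package and unit names are illustrative assumptions.
+
+```nix
+let
+  pkgs = import <nixpkgs> { };
+in
+pkgs.testers.nixosTest {
+  name = "hello-smoke-test";
+  nodes.machine = { pkgs, ... }: {
+    environment.systemPackages = [ pkgs.hello ];
+  };
+  testScript = ''
+    machine.wait_for_unit("multi-user.target")
+    machine.succeed("hello")
+  '';
+}
+```
+
+The resulting derivation's `driverInteractive` attribute can then be built and run to explore the same network interactively.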
diff --git a/doc/builders/trivial-builders.chapter.md b/doc/builders/trivial-builders.chapter.md
index 779a0a801b4ea..c05511785bf55 100644
--- a/doc/builders/trivial-builders.chapter.md
+++ b/doc/builders/trivial-builders.chapter.md
@@ -35,10 +35,10 @@ This works just like `runCommand`. The only difference is that it also provides
 
 ## `runCommandLocal` {#trivial-builder-runCommandLocal}
 
-Variant of `runCommand` that forces the derivation to be built locally, it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network roundrip and can speed up a build.
+Variant of `runCommand` that forces the derivation to be built locally, it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network round-trip and can speed up a build.
 
 ::: {.note}
-This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the `system` is usually the same as `builtins.currentSystem`.
+This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g., just copying some files to a different location or adding symlinks) because there the `system` is usually the same as `builtins.currentSystem`.
 :::
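+
+For illustration, a minimal sketch of such a cheap, locally built derivation (the name and contents are made up):
+
+```nix
+runCommandLocal "my-notes" { } ''
+  # A sub-second command: create the output and write a single file.
+  mkdir -p $out
+  echo "built locally, without substitution" > $out/notes.txt
+''
+```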
 
 ## `writeTextFile`, `writeText`, `writeTextDir`, `writeScript`, `writeScriptBin` {#trivial-builder-writeText}
@@ -219,5 +219,5 @@ produces an output path `/nix/store/<hash>-runtime-references` containing
 /nix/store/<hash>-hello-2.10
 ```
 
-but none of `hello`'s dependencies, because those are not referenced directly
+but none of `hello`'s dependencies because those are not referenced directly
 by `hi`'s output.
diff --git a/doc/contributing/coding-conventions.chapter.md b/doc/contributing/coding-conventions.chapter.md
index cfe8582e514a4..9a01b5a0828c8 100644
--- a/doc/contributing/coding-conventions.chapter.md
+++ b/doc/contributing/coding-conventions.chapter.md
@@ -214,15 +214,15 @@ Most of the time, these are the same. For instance, the package `e2fsprogs` has
 
 There are a few naming guidelines:
 
-- The `name` attribute _should_ be identical to the upstream package name.
+- The `pname` attribute _should_ be identical to the upstream package name.
 
-- The `name` attribute _must not_ contain uppercase letters — e.g., `"mplayer-1.0rc2"` instead of `"MPlayer-1.0rc2"`.
+- The `pname` and the `version` attributes _must not_ contain uppercase letters — e.g., `"mplayer"` instead of `"MPlayer"`.
 
-- The version part of the `name` attribute _must_ start with a digit (following a dash) — e.g., `"hello-0.3.1rc2"`.
+- The `version` attribute _must_ start with a digit — e.g., `"0.3.1rc2"`.
 
-- If a package is not a release but a commit from a repository, then the version part of the name _must_ be the date of that (fetched) commit. The date _must_ be in `"YYYY-MM-DD"` format. Also append `"unstable"` to the name - e.g., `"pkgname-unstable-2014-09-23"`.
+- If a package is not a release but a commit from a repository, then the `version` attribute _must_ be the date of that (fetched) commit and _must_ be in the `"unstable-YYYY-MM-DD"` format — e.g., `"unstable-2014-09-23"`.
 
-- Dashes in the package name _should_ be preserved in new variable names, rather than converted to underscores or camel cased — e.g., `http-parser` instead of `http_parser` or `httpParser`. The hyphenated style is preferred in all three package names.
+- Dashes in the package `pname` _should_ be preserved in new variable names, rather than converted to underscores or camel cased — e.g., `http-parser` instead of `http_parser` or `httpParser`. The hyphenated style is preferred in all three package names.
 
 - If there are multiple versions of a package, this _should_ be reflected in the variable names in `all-packages.nix`, e.g. `json-c_0_9` and `json-c_0_11`. If there is an obvious “default” version, make an attribute like `json-c = json-c_0_9;`. See also [](#sec-versioning)
 
@@ -511,6 +511,8 @@ patches = [
 
 Otherwise, you can add a `.patch` file to the `nixpkgs` repository. In the interest of keeping our maintenance burden to a minimum, only patches that are unique to `nixpkgs` should be added in this way.
 
+If a patch is available online but does not cleanly apply, it can be modified in some fixed ways by using additional optional arguments for `fetchpatch`. Check [](#fetchpatch) for details.
+
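+For instance, a patch whose paths need adjusting might be fetched like this. This is only a sketch: the URL is a hypothetical placeholder, `lib.fakeSha256` stands in for the real hash, and `stripLen`/`extraPrefix` are the `fetchpatch` arguments documented in [](#fetchpatch):
+
+```nix
+patches = [
+  (fetchpatch {
+    # Hypothetical upstream patch URL; replace with the real one.
+    url = "https://github.com/upstream/project/commit/some-fix.patch";
+    # Placeholder hash; replace with the hash reported by the failed build.
+    sha256 = lib.fakeSha256;
+    # Drop the first path component of the file names in the patch...
+    stripLen = 1;
+    # ...and re-prefix them so they apply inside the vendored copy.
+    extraPrefix = "third_party/project/";
+  })
+];
+```
+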
 ```nix
 patches = [ ./0001-changes.patch ];
 ```
@@ -538,16 +540,6 @@ If you do need to do create this sort of patch file, one way to do so is with gi
     $ git diff -a > nixpkgs/pkgs/the/package/0001-changes.patch
     ```
 
-If a patch is available online but does not cleanly apply, it can be modified in some fixed ways by using additional optional arguments for `fetchpatch`:
-
-- `stripLen`: Remove the first `stripLen` components of pathnames in the patch.
-- `extraPrefix`: Prefix pathnames by this string.
-- `excludes`: Exclude files matching this pattern.
-- `includes`: Include only files matching this pattern.
-- `revert`: Revert the patch.
-
-Note that because the checksum is computed after applying these effects, using or modifying these arguments will have no effect unless the `sha256` argument is changed as well.
-
 ## Package tests {#sec-package-tests}
 
 Tests are important to ensure quality and make reviews and automatic updates easy.
diff --git a/doc/contributing/reviewing-contributions.chapter.md b/doc/contributing/reviewing-contributions.chapter.md
index 0a90781d0c59e..3417854730ef6 100644
--- a/doc/contributing/reviewing-contributions.chapter.md
+++ b/doc/contributing/reviewing-contributions.chapter.md
@@ -122,10 +122,10 @@ Reviewing process:
   - [CODEOWNERS](https://help.github.com/articles/about-codeowners/) will make GitHub notify users based on the submitted changes, but it can happen that it misses some of the package maintainers.
 - Ensure that the module tests, if any, are succeeding.
 - Ensure that the introduced options are correct.
-  - Type should be appropriate (string related types differs in their merging capabilities, `optionSet` and `string` types are deprecated).
+  - Type should be appropriate (string-related types differ in their merging capabilities; the `loaOf` and `string` types are deprecated).
   - Description, default and example should be provided.
 - Ensure that option changes are backward compatible.
-  - `mkRenamedOptionModule` and `mkAliasOptionModule` functions provide way to make option changes backward compatible.
+  - `mkRenamedOptionModuleWith` provides a way to make option changes backward compatible.
 - Ensure that removed options are declared with `mkRemovedOptionModule`
 - Ensure that changes that are not backward compatible are mentioned in release notes.
 - Ensure that documentations affected by the change is updated.
@@ -157,7 +157,7 @@ Reviewing process:
 
 - Ensure that the module tests, if any, are succeeding.
 - Ensure that the introduced options are correct.
-  - Type should be appropriate (string related types differs in their merging capabilities, `optionSet` and `string` types are deprecated).
+  - Type should be appropriate (string-related types differ in their merging capabilities; the `loaOf` and `string` types are deprecated).
   - Description, default and example should be provided.
 - Ensure that module `meta` field is present
   - Maintainers should be declared in `meta.maintainers`.
diff --git a/doc/contributing/submitting-changes.chapter.md b/doc/contributing/submitting-changes.chapter.md
index 576b0f7d96fe3..471e45d7dfb30 100644
--- a/doc/contributing/submitting-changes.chapter.md
+++ b/doc/contributing/submitting-changes.chapter.md
@@ -96,7 +96,7 @@ We use jbidwatcher as an example for a discontinued project here.
 
 1. Have Nixpkgs checked out locally and up to date.
 1. Create a new branch for your change, e.g. `git checkout -b jbidwatcher`
-1. Remove the actual package including its directory, e.g. `rm -rf pkgs/applications/misc/jbidwatcher`
+1. Remove the actual package including its directory, e.g. `git rm -rf pkgs/applications/misc/jbidwatcher`
 1. Remove the package from the list of all packages (`pkgs/top-level/all-packages.nix`).
 1. Add an alias for the package name in `pkgs/top-level/aliases.nix` (There is also `pkgs/applications/editors/vim/plugins/aliases.nix`. Package sets typically do not have aliases, so we can't add them there.)
 
@@ -236,7 +236,7 @@ The `master` branch is the main development branch. It should only see non-break
 
 ### Staging branch {#submitting-changes-staging-branch}
 
-The `staging` branch is a development branch where mass-rebuilds go. It should only see non-breaking mass-rebuild commits. That means it is not to be used for testing, and changes must have been well tested already. If the branch is already in a broken state, please refrain from adding extra new breakages.
+The `staging` branch is a development branch where mass-rebuilds go. Mass rebuilds are commits that cause rebuilds of many packages: roughly more than 500, or around 1000 if they are 'light' packages. It should only see non-breaking mass-rebuild commits. That means it is not to be used for testing, and changes must have been well tested already. If the branch is already in a broken state, please refrain from adding extra new breakages.
 
 ### Staging-next branch {#submitting-changes-staging-next-branch}
 
diff --git a/doc/doc-support/default.nix b/doc/doc-support/default.nix
index 53990b6771962..7c00195ab3909 100644
--- a/doc/doc-support/default.nix
+++ b/doc/doc-support/default.nix
@@ -23,6 +23,14 @@ let
       <xsl:import href="${./parameters.xml}"/>
     </xsl:stylesheet>
   '';
+
+  # NB: This file describes the Nixpkgs manual, which happens to use module
+  #     docs infra originally developed for NixOS.
+  optionsDoc = pkgs.nixosOptionsDoc {
+    inherit (pkgs.lib.evalModules { modules = [ ../../pkgs/top-level/config.nix ]; }) options;
+    documentType = "none";
+  };
+
 in pkgs.runCommand "doc-support" {}
 ''
   mkdir result
@@ -30,6 +38,7 @@ in pkgs.runCommand "doc-support" {}
     cd result
     ln -s ${locationsXml} ./function-locations.xml
     ln -s ${functionDocs} ./function-docs
+    ln -s ${optionsDoc.optionsDocBook} ./config-options.docbook.xml
 
     ln -s ${pkgs.docbook5}/xml/rng/docbook/docbook.rng ./docbook.rng
     ln -s ${pkgs.docbook_xsl_ns}/xml/xsl ./xsl
diff --git a/doc/functions/library/attrsets.xml b/doc/functions/library/attrsets.xml
index a30f4edf4c19c..052bfa1f6ae34 100644
--- a/doc/functions/library/attrsets.xml
+++ b/doc/functions/library/attrsets.xml
@@ -1474,7 +1474,7 @@ lib.attrsets.zipAttrsWith
  <section xml:id="function-library-lib.attrsets.zipAttrs">
   <title><function>lib.attrsets.zipAttrs</function></title>
 
-  <subtitle><literal>zipAttrsWith :: [ AttrSet ] -> AttrSet</literal>
+  <subtitle><literal>zipAttrs :: [ AttrSet ] -> AttrSet</literal>
   </subtitle>
 
   <xi:include href="./locations.xml" xpointer="lib.attrsets.zipAttrs" />
diff --git a/doc/hooks/index.xml b/doc/hooks/index.xml
new file mode 100644
index 0000000000000..6a046eae28857
--- /dev/null
+++ b/doc/hooks/index.xml
@@ -0,0 +1,10 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+         xmlns:xlink="http://www.w3.org/1999/xlink"
+         xmlns:xi="http://www.w3.org/2001/XInclude"
+         xml:id="chap-hooks">
+ <title>Hooks reference</title>
+ <para>
+  Nixpkgs has several hook packages that augment the stdenv phases.
+ </para>
+ <xi:include href="./postgresql-test-hook.section.xml" />
+</chapter>
diff --git a/doc/hooks/postgresql-test-hook.section.md b/doc/hooks/postgresql-test-hook.section.md
new file mode 100644
index 0000000000000..077fac14ebbff
--- /dev/null
+++ b/doc/hooks/postgresql-test-hook.section.md
@@ -0,0 +1,59 @@
+
+# `postgresqlTestHook` {#sec-postgresqlTestHook}
+
+This hook starts a PostgreSQL server during the `checkPhase`. Example:
+
+```nix
+{ stdenv, postgresql, postgresqlTestHook }:
+stdenv.mkDerivation {
+
+  # ...
+
+  checkInputs = [
+    postgresql
+    postgresqlTestHook
+  ];
+}
+```
+
+If you use a custom `checkPhase`, remember to add the `runHook` calls:
+```nix
+  checkPhase = ''
+    runHook preCheck
+
+    # ... your tests
+
+    runHook postCheck
+  '';
+```
+
+## Variables {#sec-postgresqlTestHook-variables}
+
+The hook logic will read a number of variables and set them to a default value if unset or empty.
+
+Exported variables:
+
+ - `PGDATA`: location of server files.
+ - `PGHOST`: location of UNIX domain socket directory; the default `host` in a connection string.
+ - `PGUSER`: user to create / log in with, default: `test_user`.
+ - `PGDATABASE`: database name, default: `test_db`.
+
+Bash-only variables:
+
+ - `postgresqlTestUserOptions`: SQL options to use when creating the `$PGUSER` role, default: `LOGIN`.
+ - `postgresqlTestSetupSQL`: SQL commands to run as database administrator after startup, default: statements that create `$PGUSER` and `$PGDATABASE`.
+ - `postgresqlTestSetupCommands`: bash commands to run after database start, defaults to running `$postgresqlTestSetupSQL` as database administrator.
+ - `postgresqlEnableTCP`: set to `1` to enable TCP listening. Flaky; not recommended.
+ - `postgresqlStartCommands`: defaults to `pg_ctl start`.
+
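+For example, a derivation can override some of these defaults by setting them as environment attributes. A sketch, with a hypothetical package and values:
+
+```nix
+stdenv.mkDerivation {
+  pname = "my-pg-consumer";  # hypothetical package
+  version = "0.1.0";
+  src = ./.;
+
+  checkInputs = [
+    postgresql
+    postgresqlTestHook
+  ];
+  doCheck = true;
+
+  # Values picked up by the hook instead of its defaults.
+  PGDATABASE = "my_test_db";
+  postgresqlTestUserOptions = "LOGIN SUPERUSER";
+}
+```
+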
+## TCP and the Nix sandbox {#sec-postgresqlTestHook-tcp}
+
+`postgresqlEnableTCP` relies on network sandboxing, which is not available on macOS and some custom Nix installations, resulting in flaky tests.
+For this reason, it is disabled by default.
+
+The preferred solution is to make the test suite use a UNIX domain socket connection. This is the default behavior when no `host` connection parameter is provided.
+Some test suites hardcode a value for `host` though, so a patch may be required. If you can upstream the patch, you can make `host` default to the `PGHOST` environment variable when set. Otherwise, you can patch it locally to omit the `host` connection string parameter altogether.
+
+::: {.note}
+The error `libpq: failed (could not receive data from server: Connection refused` is generally an indication that the test suite is trying to connect through TCP.
+:::
diff --git a/doc/languages-frameworks/chicken.section.md b/doc/languages-frameworks/chicken.section.md
new file mode 100644
index 0000000000000..d8c35bd20c506
--- /dev/null
+++ b/doc/languages-frameworks/chicken.section.md
@@ -0,0 +1,49 @@
+# CHICKEN {#sec-chicken}
+
+[CHICKEN](https://call-cc.org/) is an
+[R⁵RS](https://schemers.org/Documents/Standards/R5RS/HTML/)-compliant Scheme
+compiler. It includes an interactive mode and a custom package format, "eggs".
+
+## Using Eggs
+
+Eggs described in nixpkgs are available inside the
+`chickenPackages.chickenEggs` attrset. Including an egg as a build input is
+done in the typical Nix fashion. For example, to include support for [SRFI
+189](https://srfi.schemers.org/srfi-189/srfi-189.html) in a derivation, one
+might write:
+
+```nix
+  buildInputs = [
+    chicken
+    chickenPackages.chickenEggs.srfi-189
+  ];
+```
+
+Both `chicken` and its eggs have a setup hook which configures the environment
+variables `CHICKEN_INCLUDE_PATH` and `CHICKEN_REPOSITORY_PATH`.
+
+## Updating Eggs
+
+nixpkgs only knows about a subset of all published eggs. It uses
+[egg2nix](https://github.com/the-kenny/egg2nix) to generate a
+package set from a list of eggs to include.
+
+The package set is regenerated by running the following shell commands:
+
+```
+$ nix-shell -p chickenPackages.egg2nix
+$ cd pkgs/development/compilers/chicken/5/
+$ egg2nix eggs.scm > eggs.nix
+```
+
+## Adding Eggs
+
+When we run `egg2nix`, we obtain one collection of eggs with
+mutually-compatible versions. This means that when we add new eggs, we may
+need to update existing eggs. To keep those separate, follow the procedure for
+updating eggs before including more eggs.
+
+To include more eggs, edit `pkgs/development/compilers/chicken/5/eggs.scm`.
+The first section of this file lists eggs which are required by `egg2nix`
+itself; all other eggs go into the second section. After editing, follow the
+procedure for updating eggs.
diff --git a/doc/languages-frameworks/coq.section.md b/doc/languages-frameworks/coq.section.md
index 9a692104a0417..11777b5eef42e 100644
--- a/doc/languages-frameworks/coq.section.md
+++ b/doc/languages-frameworks/coq.section.md
@@ -29,14 +29,19 @@ The recommended way of defining a derivation for a Coq library, is to use the `c
 * `releaseRev` (optional, defaults to `(v: v)`), provides a default mapping from release names to revision hashes/branch names/tags,
 * `displayVersion` (optional), provides a way to alter the computation of `name` from `pname`, by explaining how to display version numbers,
 * `namePrefix` (optional, defaults to `[ "coq" ]`), provides a way to alter the computation of `name` from `pname`, by explaining which dependencies must occur in `name`,
-* `extraNativeBuildInputs` (optional), by default `nativeBuildInputs` just contains `coq`, this allows to add more native build inputs, `nativeBuildInputs` are executables and `buildInputs` are libraries and dependencies,
-* `extraBuildInputs` (optional), this allows to add more build inputs,
-* `mlPlugin` (optional, defaults to `false`). Some extensions (plugins) might require OCaml and sometimes other OCaml packages. Standard dependencies can be added by setting the current option to `true`. For a finer grain control, the `coq.ocamlPackages` attribute can be used in `extraBuildInputs` to depend on the same package set Coq was built against.
-* `useDune2ifVersion` (optional, default to `(x: false)` uses Dune2 to build the package if the provided predicate evaluates to true on the version, e.g. `useDune2if = versions.isGe "1.1"`  will use dune if the version of the package is greater or equal to `"1.1"`,
+* `nativeBuildInputs` (optional), is a list of executables that are required to build the current derivation, in addition to the default ones (namely `which`, `dune` and `ocaml` depending on whether `useDune2`, `useDune2ifVersion` and `mlPlugin` are set).
+* `extraNativeBuildInputs` (optional, deprecated), an additional list of derivations to add to `nativeBuildInputs`,
+* `overrideNativeBuildInputs` (optional) replaces the default list of derivations to which `nativeBuildInputs` and `extraNativeBuildInputs` add extra elements,
+* `buildInputs` (optional), is a list of libraries and dependencies that are required to build and run the current derivation, in addition to the default one `[ coq ]`,
+* `extraBuildInputs` (optional, deprecated), an additional list of derivations to add to `buildInputs`,
+* `overrideBuildInputs` (optional) replaces the default list of derivations to which `buildInputs` and `extraBuildInputs` add extra elements,
+* `propagatedBuildInputs` (optional) is passed as is to `mkDerivation`; we recommend using this for Coq libraries and Coq plugin dependencies, as this makes sure the paths of the compiled libraries and plugins will always be added to the build environments of subsequent derivations, which is necessary for Coq packages to work correctly,
+* `mlPlugin` (optional, defaults to `false`). Some extensions (plugins) might require OCaml and sometimes other OCaml packages. Standard dependencies can be added by setting the current option to `true`. For a finer grain control, the `coq.ocamlPackages` attribute can be used in `nativeBuildInputs`, `buildInputs`, and `propagatedBuildInputs` to depend on the same package set Coq was built against.
+* `useDune2ifVersion` (optional, defaults to `(x: false)`) uses Dune2 to build the package if the provided predicate evaluates to true on the version, e.g. `useDune2ifVersion = versions.isGe "1.1"` will use Dune2 if the version of the package is greater or equal to `"1.1"`,
 * `useDune2` (optional, defaults to `false`) uses Dune2 to build the package if set to true, the presence of this attribute overrides the behavior of the previous one.
 * `opam-name` (optional, defaults to concatenating with a dash separator the components of `namePrefix` and `pname`), name of the Dune package to build.
 * `enableParallelBuilding` (optional, defaults to `true`), since it is activated by default, we provide a way to disable it.
-* `extraInstallFlags` (optional), allows to extend `installFlags` which initializes the variable `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
+* `extraInstallFlags` (optional), allows extending `installFlags`, which initializes the variables `DESTDIR` and `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
 * `setCOQBIN` (optional, defaults to `true`), by default, the environment variable `$COQBIN` is set to the current Coq's binary, but one can disable this behavior by setting it to `false`,
 * `useMelquiondRemake` (optional, default to `null`) is an attribute set, which, if given, overloads the `preConfigurePhases`, `configureFlags`, `buildPhase`, and `installPhase` attributes of the derivation for a specific use in libraries using `remake` as set up by Guillaume Melquiond for `flocq`, `gappalib`, `interval`, and `coquelicot` (see the corresponding derivation for concrete examples of use of this option). For backward compatibility, the attribute `useMelquiondRemake.logpath` must be set to the logical root of the library (otherwise, one can pass `useMelquiondRemake = {}` to activate this without backward compatibility).
 * `dropAttrs`, `keepAttrs`, `dropDerivationAttrs` are all optional and allow to tune which attribute is added or removed from the final call to `mkDerivation`.
diff --git a/doc/languages-frameworks/cuda.section.md b/doc/languages-frameworks/cuda.section.md
new file mode 100644
index 0000000000000..fccf66bf79d2a
--- /dev/null
+++ b/doc/languages-frameworks/cuda.section.md
@@ -0,0 +1,34 @@
+# CUDA {#cuda}
+
+CUDA-only packages are stored in the `cudaPackages` packages set. This set
+includes the `cudatoolkit`, portions of the toolkit in separate derivations,
+`cudnn`, `cutensor` and `nccl`.
+
+A package set is available for each CUDA version, so for example
+`cudaPackages_11_6`. Within each set is a matching version of the above listed
+packages. Additionally, other versions of the packages that are packaged and
+compatible are available as well. For example, there can be a
+`cudaPackages.cudnn_8_3_2` package.
+
+To use one or more CUDA packages in an expression, give the expression a `cudaPackages` parameter, and in case CUDA is optional:
+```nix
+cudaSupport ? false
+cudaPackages ? {}
+```
+
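+As a sketch of how a package expression might consume these parameters (the package itself and the use of `lib.optionals` are illustrative assumptions):
+
+```nix
+{ lib, stdenv, cudaSupport ? false, cudaPackages ? { } }:
+
+stdenv.mkDerivation {
+  pname = "mypkg";  # hypothetical package
+  version = "1.0";
+  src = ./.;
+
+  # Only pull in the CUDA toolchain when the flag is enabled.
+  buildInputs = lib.optionals cudaSupport [
+    cudaPackages.cudatoolkit
+    cudaPackages.cudnn
+  ];
+}
+```
+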
+When using `callPackage`, you can choose to pass in a different variant, e.g.
+when a different version of the toolkit suffices:
+```nix
+mypkg = callPackage { cudaPackages = cudaPackages_11_5; }
+```
+
+If another version of, say, `cudnn` or `cutensor` is needed, you can override the
+package set to make it the default. This guarantees you get a consistent package
+set:
+```nix
+mypkg = let
+  cudaPackages = cudaPackages_11_5.overrideScope' (final: prev: {
+    cudnn = prev.cudnn_8_3_2;
+  });
+in callPackage { inherit cudaPackages; };
+```
diff --git a/doc/languages-frameworks/gnome.section.md b/doc/languages-frameworks/gnome.section.md
index 29cb2e0e464a2..d5996cce13cfd 100644
--- a/doc/languages-frameworks/gnome.section.md
+++ b/doc/languages-frameworks/gnome.section.md
@@ -42,7 +42,21 @@ Unlike other libraries mentioned in this section, GdkPixbuf only supports a sing
 
 ### Icons {#ssec-gnome-icons}
 
-When an application uses icons, an icon theme should be available in `XDG_DATA_DIRS` during runtime. The package for the default, icon-less [hicolor-icon-theme](https://www.freedesktop.org/wiki/Software/icon-theme/) (should be propagated by every icon theme) contains [a setup hook](#ssec-gnome-hooks-hicolor-icon-theme) that will pick up icon themes from `buildInputs` and pass it to our wrapper. Unfortunately, relying on that would mean every user has to download the theme included in the package expression no matter their preference. For that reason, we leave the installation of icon theme on the user. If you use one of the desktop environments, you probably already have an icon theme installed.
+When an application uses icons, an icon theme should be available in `XDG_DATA_DIRS` during runtime. The package for the default, icon-less [hicolor-icon-theme](https://www.freedesktop.org/wiki/Software/icon-theme/) (should be propagated by every icon theme) contains [a setup hook](#ssec-gnome-hooks-hicolor-icon-theme) that will pick up icon themes from `buildInputs` and add their data directories to the `XDG_ICON_DIRS` environment variable (this is Nixpkgs-specific, not an actual XDG standard variable). Unfortunately, relying on that would mean every user has to download the theme included in the package expression no matter their preference. For that reason, we leave the installation of the icon theme to the user. If you use one of the desktop environments, you probably already have an icon theme installed.
+
+In the rare case you need to use icons from dependencies (e.g. when an app forces an icon theme), you can use the following to pick them up:
+
+```nix
+  buildInputs = [
+    pantheon.elementary-icon-theme
+  ];
+  preFixup = ''
+    gappsWrapperArgs+=(
+      # The icon theme is hardcoded.
+      --prefix XDG_DATA_DIRS : "$XDG_ICON_DIRS"
+    )
+  '';
+```
 
 To avoid costly file system access when locating icons, GTK, [as well as Qt](https://woboq.com/blog/qicon-reads-gtk-icon-cache-in-qt57.html), can rely on `icon-theme.cache` files from the themes' top-level directories. These files are generated using `gtk-update-icon-cache`, which is expected to be run whenever an icon is added or removed to an icon theme (typically an application icon into `hicolor` theme) and some programs do indeed run this after icon installation. However, since packages are installed into their own prefix by Nix, this would lead to conflicts. For that reason, `gtk3` provides a [setup hook](#ssec-gnome-hooks-gtk-drop-icon-theme-cache) that will clean the file from installation. Since most applications only ship their own icon that will be loaded on start-up, it should not affect them too much. On the other hand, icon themes are much larger and more widely used so we need to cache them. Because we recommend installing icon themes globally, we will generate the cache files from all packages in a profile using a NixOS module. You can enable the cache generation using `gtk.iconCache.enable` option if your desktop environment does not already do that.
 
@@ -98,7 +112,7 @@ For convenience, it also adds `dconf.lib` for a GIO module implementing a GSetti
 
 - []{#ssec-gnome-hooks-dconf} `dconf.lib` is a dependency of `wrapGAppsHook`, which then also adds it to the `GIO_EXTRA_MODULES` variable.
 
-- []{#ssec-gnome-hooks-hicolor-icon-theme} `hicolor-icon-theme`’s setup hook will add icon themes to `XDG_ICON_DIRS` which is prepended to `XDG_DATA_DIRS` by `wrapGAppsHook`.
+- []{#ssec-gnome-hooks-hicolor-icon-theme} `hicolor-icon-theme`’s setup hook will add icon themes to `XDG_ICON_DIRS`.
 
 - []{#ssec-gnome-hooks-gobject-introspection} `gobject-introspection` setup hook populates `GI_TYPELIB_PATH` variable with `lib/girepository-1.0` directories of dependencies, which is then added to wrapper by `wrapGAppsHook`. It also adds `share` directories of dependencies to `XDG_DATA_DIRS`, which is intended to promote GIR files but it also [pollutes the closures](https://github.com/NixOS/nixpkgs/issues/32790) of packages using `wrapGAppsHook`.
 
diff --git a/doc/languages-frameworks/go.section.md b/doc/languages-frameworks/go.section.md
index 411205d08e430..9c67a514335ed 100644
--- a/doc/languages-frameworks/go.section.md
+++ b/doc/languages-frameworks/go.section.md
@@ -142,4 +142,8 @@ Removes the pre-existing vendor directory. This should only be used if the depen
 
 ### `subPackages` {#var-go-subPackages}
 
-Limits the builder from building child packages that have not been listed. If `subPackages` is not specified, all child packages will be built.
+Specified as a string or list of strings. Limits the builder from building child packages that have not been listed. If `subPackages` is not specified, all child packages will be built.
+
+### `excludedPackages` {#var-go-excludedPackages}
+
+Specified as a string or list of strings. Causes the builder to skip building child packages that match any of the provided values. If `excludedPackages` is not specified, no child packages will be skipped.
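+
+A sketch of how these two variables might appear in a `buildGoModule` call; the package name, source, and hash are placeholders:
+
+```nix
+buildGoModule {
+  pname = "mytool";  # hypothetical package
+  version = "1.2.3";
+  src = ./.;
+
+  # Placeholder; replace with the hash reported by the failed build.
+  vendorSha256 = lib.fakeSha256;
+
+  # Only build the main command...
+  subPackages = [ "cmd/mytool" ];
+  # ...and skip building the bundled examples.
+  excludedPackages = [ "examples" ];
+}
+```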
diff --git a/doc/languages-frameworks/index.xml b/doc/languages-frameworks/index.xml
index f221693e764c8..3d5b2f738976d 100644
--- a/doc/languages-frameworks/index.xml
+++ b/doc/languages-frameworks/index.xml
@@ -9,8 +9,10 @@
  <xi:include href="android.section.xml" />
  <xi:include href="beam.section.xml" />
  <xi:include href="bower.section.xml" />
+ <xi:include href="chicken.section.xml" />
  <xi:include href="coq.section.xml" />
  <xi:include href="crystal.section.xml" />
+ <xi:include href="cuda.section.xml" />
  <xi:include href="dhall.section.xml" />
  <xi:include href="dotnet.section.xml" />
  <xi:include href="emscripten.section.xml" />
diff --git a/doc/languages-frameworks/javascript.section.md b/doc/languages-frameworks/javascript.section.md
index bf5742d6855e4..19e31ea690263 100644
--- a/doc/languages-frameworks/javascript.section.md
+++ b/doc/languages-frameworks/javascript.section.md
@@ -8,19 +8,16 @@ The various tools available will be listed in the [tools-overview](#javascript-t
 
 ## Getting unstuck / finding code examples
 
-If you find you are lacking inspiration for packing javascript applications, the links below might prove useful.
-Searching online for prior art can be helpful if you are running into solved problems.
+If you find you are lacking inspiration for packaging JavaScript applications, the links below might prove useful. Searching online for prior art can be helpful if you are running into solved problems.
 
 ### Github
 
 - Searching Nix files for `mkYarnPackage`: <https://github.com/search?q=mkYarnPackage+language%3ANix&type=code>
-
 - Searching just `flake.nix` files for `mkYarnPackage`: <https://github.com/search?q=mkYarnPackage+filename%3Aflake.nix&type=code>
 
 ### Gitlab
 
 - Searching Nix files for `mkYarnPackage`: <https://gitlab.com/search?scope=blobs&search=mkYarnPackage+extension%3Anix>
-
 - Searching just `flake.nix` files for `mkYarnPackage`: <https://gitlab.com/search?scope=blobs&search=mkYarnPackage+filename%3Aflake.nix>
 
 ## Tools overview {#javascript-tools-overview}
@@ -35,107 +32,128 @@ It is often not documented which node version is used upstream, but if it is, tr
 
 This can be a problem if upstream is using the latest and greatest and you are trying to use an earlier version of node. Some cryptic errors regarding V8 may appear.
 
-An exception to this:
-
 ### Try to respect the package manager originally used by upstream (and use the upstream lock file) {#javascript-upstream-package-manager}
 
 A lock file (package-lock.json, yarn.lock...) is supposed to make reproducible installations of node_modules for each tool.
 
 Guidelines of package managers, recommend to commit those lock files to the repos. If a particular lock file is present, it is a strong indication of which package manager is used upstream.
 
-It's better to try to use a nix tool that understand the lock file. Using a different tool might give you hard to understand error because different packages have been installed. An example of problems that could arise can be found [here](https://github.com/NixOS/nixpkgs/pull/126629). Upstream uses npm, but this is an attempt to package it with yarn2nix (that uses yarn.lock)
+It's better to try to use a Nix tool that understands the lock file. Using a different tool might give you hard-to-understand errors because different packages will have been installed. An example of problems that could arise can be found [here](https://github.com/NixOS/nixpkgs/pull/126629). Upstream uses NPM, but this is an attempt to package it with `yarn2nix` (which uses yarn.lock).
 
 Using a different tool forces to commit a lock file to the repository. Those files are fairly large, so when packaging for nixpkgs, this approach does not scale well.
 
 Exceptions to this rule are:
 
-- when you encounter one of the bugs from a nix tool. In each of the tool specific instructions, known problems will be detailed. If you have a problem with a particular tool, then it's best to try another tool, even if this means you will have to recreate a lock file and commit it to nixpkgs. In general yarn2nix has less known problems and so a simple search in nixpkgs will reveal many yarn.lock files committed
-- Some lock files contain particular version of a package that has been pulled off npm for some reason. In that case, you can recreate upstream lock (by removing the original and `npm install`, `yarn`, ...) and commit this to nixpkgs.
-- The only tool that supports workspaces (a feature of npm that helps manage sub-directories with different package.json from a single top level package.json) is yarn2nix. If upstream has workspaces you should try yarn2nix.
+- When you encounter one of the bugs from a Nix tool. In each of the tool-specific instructions, known problems will be detailed. If you have a problem with a particular tool, then it's best to try another tool, even if this means you will have to recreate a lock file and commit it to nixpkgs. In general `yarn2nix` has fewer known problems, so a simple search in nixpkgs will reveal many committed yarn.lock files.
+- Some lock files contain a particular version of a package that has been pulled off NPM for some reason. In that case, you can recreate the upstream lock file (by removing the original and running `npm install`, `yarn`, ...) and commit this to nixpkgs.
+- The only tool that supports workspaces (a feature of NPM that helps manage sub-directories with different package.json from a single top level package.json) is `yarn2nix`. If upstream has workspaces you should try `yarn2nix`.
 
 ### Try to use upstream package.json {#javascript-upstream-package-json}
 
-Exceptions to this rule are
+Exceptions to this rule are:
 
-- Sometimes the upstream repo assumes some dependencies be installed globally. In that case you can add them manually to the upstream package.json (`yarn add xxx` or `npm install xxx`, ...). Dependencies that are installed locally can be executed with `npx` for cli tools. (e.g. `npx postcss ...`, this is how you can call those dependencies in the phases).
-- Sometimes there is a version conflict between some dependency requirements. In that case you can fix a version (by removing the `^`).
-- Sometimes the script defined in the package.json does not work as is. Some scripts for example use cli tools that might not be available, or cd in directory with a different package.json (for workspaces notably). In that case, it's perfectly fine to look at what the particular script is doing and break this down in the phases. In the build script you can see `build:*` calling in turns several other build scripts like `build:ui` or `build:server`. If one of those fails, you can try to separate those into:
+- Sometimes the upstream repo assumes that some dependencies are installed globally. In that case you can add them manually to the upstream package.json (`yarn add xxx` or `npm install xxx`, ...). Dependencies that are installed locally can be executed with `npx` for CLI tools (e.g. `npx postcss ...`; this is how you can call those dependencies in the phases).
+- Sometimes there is a version conflict between some dependency requirements. In that case you can fix a version by removing the `^`.
+- Sometimes the script defined in the package.json does not work as is. Some scripts, for example, use CLI tools that might not be available, or `cd` into a directory with a different package.json (notably for workspaces). In that case, it's perfectly fine to look at what the particular script is doing and break this down in the phases. In the build script you can see `build:*` calling in turn several other build scripts like `build:ui` or `build:server`. If one of those fails, you can try to separate those into:
 
-```Shell
-yarn build:ui
-yarn build:server
-# OR
-npm run build:ui
-npm run build:server
-```
+  ```sh
+  yarn build:ui
+  yarn build:server
+  # OR
+  npm run build:ui
+  npm run build:server
+  ```
 
-when you need to override a package.json. It's nice to use the one from the upstream src and do some explicit override. Here is an example.
+  When you need to override a package.json, it's nice to use the one from the upstream source and do some explicit overrides. Here is an example:
 
-```nix
-patchedPackageJSON = final.runCommand "package.json" { } ''
-  ${jq}/bin/jq '.version = "0.4.0" |
-    .devDependencies."@jsdoc/cli" = "^0.2.5"
-    ${sonar-src}/package.json > $out
-'';
-```
+  ```nix
+  patchedPackageJSON = final.runCommand "package.json" { } ''
+    ${jq}/bin/jq '.version = "0.4.0" |
+      .devDependencies."@jsdoc/cli" = "^0.2.5"
+      ' ${sonar-src}/package.json > $out
+  '';
+  ```
 
-you will still need to commit the modified version of the lock files, but at least the overrides are explicit for everyone to see.
+  You will still need to commit the modified version of the lock files, but at least the overrides are explicit for everyone to see.
 
 ### Using node_modules directly {#javascript-using-node_modules}
 
-each tool has an abstraction to just build the node_modules (dependencies) directory. you can always use the stdenv.mkDerivation with the node_modules to build the package (symlink the node_modules directory and then use the package build command). the node_modules abstraction can be also used to build some web framework frontends. For an example of this see how [plausible](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix) is built. mkYarnModules to make the derivation containing node_modules. Then when building the frontend you can just symlink the node_modules directory
+Each tool has an abstraction to just build the node_modules (dependencies) directory. You can always use `stdenv.mkDerivation` with the node_modules to build the package (symlink the node_modules directory and then use the package build command). The node_modules abstraction can also be used to build some web framework frontends. For an example of this, see how [plausible](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix) is built. Use `mkYarnModules` to make the derivation containing node_modules. Then when building the frontend you can just symlink the node_modules directory.
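+
+For illustration, a minimal sketch of that approach, assuming a `nodeModules` derivation (e.g. produced by `mkYarnModules`) that exposes a `node_modules` directory at its root, and a hypothetical `npm run build` that emits a `dist/` folder:
+
+```nix
+stdenv.mkDerivation {
+  pname = "my-frontend";  # hypothetical package
+  version = "0.1.0";
+  src = ./.;
+
+  nativeBuildInputs = [ nodejs ];
+
+  buildPhase = ''
+    runHook preBuild
+    # Reuse the pre-built dependencies instead of letting npm fetch them.
+    ln -s ${nodeModules}/node_modules node_modules
+    npm run build
+    runHook postBuild
+  '';
+
+  installPhase = ''
+    runHook preInstall
+    cp -r dist $out
+    runHook postInstall
+  '';
+}
+```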
 
-## javascript packages inside nixpkgs {#javascript-packages-nixpkgs}
+## JavaScript packages inside nixpkgs {#javascript-packages-nixpkgs}
 
-The `pkgs/development/node-packages` folder contains a generated collection of
-[NPM packages](https://npmjs.com/) that can be installed with the Nix package
-manager.
+The [pkgs/development/node-packages](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages) folder contains a generated collection of [NPM packages](https://npmjs.com/) that can be installed with the Nix package manager.
 
-As a rule of thumb, the package set should only provide _end user_ software
-packages, such as command-line utilities. Libraries should only be added to the
-package set if there is a non-NPM package that requires it.
+As a rule of thumb, the package set should only provide _end user_ software packages, such as command-line utilities. Libraries should only be added to the package set if there is a non-NPM package that requires it.
 
-When it is desired to use NPM libraries in a development project, use the
-`node2nix` generator directly on the `package.json` configuration file of the
-project.
+When it is desired to use NPM libraries in a development project, use the `node2nix` generator directly on the `package.json` configuration file of the project.
 
-The package set provides support for the official stable Node.js versions.
-The latest stable LTS release in `nodePackages`, as well as the latest stable
-Current release in `nodePackages_latest`.
+The package set provides support for the official stable Node.js versions. The latest stable LTS release is available in `nodePackages`, and the latest stable current release in `nodePackages_latest`.
 
-If your package uses native addons, you need to examine what kind of native
-build system it uses. Here are some examples:
+If your package uses native addons, you need to examine what kind of native build system it uses. Here are some examples:
 
 - `node-gyp`
 - `node-gyp-builder`
 - `node-pre-gyp`
 
-After you have identified the correct system, you need to override your package
-expression while adding in build system as a build input. For example, `dat`
-requires `node-gyp-build`, so [we override](https://github.com/NixOS/nixpkgs/blob/32f5e5da4a1b3f0595527f5195ac3a91451e9b56/pkgs/development/node-packages/default.nix#L37-L40) its expression in [`default.nix`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/default.nix):
+After you have identified the correct system, you need to override your package expression while adding the build system as a build input. For example, `dat` requires `node-gyp-build`, so we override its expression in [pkgs/development/node-packages/overrides.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/overrides.nix):
 
 ```nix
-    dat = super.dat.override {
-      buildInputs = [ self.node-gyp-build pkgs.libtool pkgs.autoconf pkgs.automake ];
-      meta.broken = since "12";
-    };
+    dat = prev.dat.override (oldAttrs: {
+      buildInputs = [ final.node-gyp-build pkgs.libtool pkgs.autoconf pkgs.automake ];
+      meta = oldAttrs.meta // { broken = since "12"; };
+    });
 ```
 
+### Adding and Updating JavaScript packages in nixpkgs
+
 To add a package from NPM to nixpkgs:
 
-1. Modify `pkgs/development/node-packages/node-packages.json` to add, update
-    or remove package entries to have it included in `nodePackages` and
-    `nodePackages_latest`.
-2. Run the script: `cd pkgs/development/node-packages && ./generate.sh`.
+1. Modify [pkgs/development/node-packages/node-packages.json](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/node-packages.json) to add, update or remove package entries to have it included in `nodePackages` and `nodePackages_latest`.
+2. Run the script:
+
+   ```sh
+   ./pkgs/development/node-packages/generate.sh
+   ```
+
 3. Build your new package to test your changes:
-    `cd /path/to/nixpkgs && nix-build -A nodePackages.<new-or-updated-package>`.
-    To build against the latest stable Current Node.js version (e.g. 14.x):
-    `nix-build -A nodePackages_latest.<new-or-updated-package>`
-4. Add and commit all modified and generated files.
 
-For more information about the generation process, consult the
-[README.md](https://github.com/svanderburg/node2nix) file of the `node2nix`
-tool.
+   ```sh
+   nix-build -A nodePackages.<new-or-updated-package>
+   ```
+
+    To build against the latest stable Current Node.js version (e.g. 18.x):
+
+    ```sh
+    nix-build -A nodePackages_latest.<new-or-updated-package>
+    ```
+
+    If the package doesn't build, you may need to add an override as explained above.
+4. If the package's name doesn't match any of the executables it provides, add an entry in [pkgs/development/node-packages/main-programs.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/main-programs.nix). This will be the case for all scoped packages, e.g., `@angular/cli` (see the sketch after this list).
+5. Add and commit all modified and generated files.
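+
+As mentioned in step 4, an entry in `main-programs.nix` is simply a mapping from the package name to the executable it provides. A sketch of what such an entry might look like (illustrative, not verbatim from the file):
+
+```nix
+"@angular/cli" = "ng";
+```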
+
+For more information about the generation process, consult the [README.md](https://github.com/svanderburg/node2nix) file of the `node2nix` tool.
+
+To update NPM packages in nixpkgs, run the same `generate.sh` script:
+
+```sh
+./pkgs/development/node-packages/generate.sh
+```
+
+#### Git protocol error
+
+Some packages may have Git dependencies from GitHub specified with `git://`.
+GitHub has [disabled unencrypted Git connections](https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git), so you may see the following error when running the generate script:
+
+```
+The unauthenticated git protocol on port 9418 is no longer supported
+```
+
+Use the following Git configuration to resolve the issue:
+
+```sh
+git config --global url."https://github.com/".insteadOf git://github.com/
+```
 
 ## Tool specific instructions {#javascript-tool-specific}
 
@@ -143,34 +161,33 @@ tool.
 
 #### Preparation {#javascript-node2nix-preparation}
 
-you will need to generate a nix expression for the dependencies
+You will need to generate a Nix expression for the dependencies. Don't forget the `-l package-lock.json` option if there is a lock file. Most probably you will need the `--development` flag to include the `devDependencies`.
 
-- don't forget the `-l package-lock.json` if there is a lock file
-- Most probably you will need the `--development` to include the `devDependencies`
-
-so the command will most likely be
-`node2nix --development -l package-lock.json`
+So the command will most likely be:
+
+```sh
+node2nix --development -l package-lock.json
+```
 
-[link to the doc in the repo](https://github.com/svanderburg/node2nix)
+See `node2nix` [docs](https://github.com/svanderburg/node2nix) for more info.
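+
+`node2nix` writes its output into several files (by default `default.nix`, `node-packages.nix` and `node-env.nix`). A minimal sketch of how the generated composition expression might then be consumed, assuming the default file names (double-check the attributes in the generated `default.nix`, as they depend on the node2nix version and invocation):
+
+```nix
+# shell.nix (sketch)
+{ pkgs ? import <nixpkgs> { } }:
+
+let
+  generated = import ./default.nix { inherit pkgs; };
+in
+# `generated.shell` provides a development shell; `generated.package` builds the project itself
+generated.shell
+```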
 
 #### Pitfalls {#javascript-node2nix-pitfalls}
 
-- if upstream package.json does not have a "version" attribute, node2nix will crash. You will need to add it like shown in [the package.json section](#javascript-upstream-package-json)
-- node2nix has some [bugs](https://github.com/svanderburg/node2nix/issues/238). related to working with lock files from npm distributed with nodejs-16_x
-- node2nix does not like missing packages from npm. If you see something like `Cannot resolve version: vue-loader-v16@undefined` then you might want to try another tool. The package might have been pulled off of npm.
+- If upstream package.json does not have a "version" attribute, `node2nix` will crash. You will need to add it as shown in [the package.json section](#javascript-upstream-package-json).
+- `node2nix` has some [bugs](https://github.com/svanderburg/node2nix/issues/238) related to working with lock files from NPM distributed with `nodejs-16_x`.
+- `node2nix` does not like missing packages from NPM. If you see something like `Cannot resolve version: vue-loader-v16@undefined` then you might want to try another tool. The package might have been pulled off of NPM.
 
 ### yarn2nix {#javascript-yarn2nix}
 
 #### Preparation {#javascript-yarn2nix-preparation}
 
-you will need at least a yarn.lock and yarn.nix file
+You will need at least a `yarn.lock` and a `yarn.nix` file.
 
-- generate a yarn.lock in upstream if it is not already there
-- `yarn2nix > yarn.nix` will generate the dependencies in a nix format
+- Generate a `yarn.lock` in upstream if it is not already there.
+- `yarn2nix > yarn.nix` will generate the dependencies in a Nix format.
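+
+Concretely, the preparation is typically just the following two commands, run inside the project's source tree:
+
+```sh
+yarn install          # creates or updates yarn.lock
+yarn2nix > yarn.nix   # writes the Nix expression describing the dependencies
+```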
 
 #### mkYarnPackage {#javascript-yarn2nix-mkYarnPackage}
 
-this will by default try to generate a binary. For package only generating static assets (Svelte, Vue, React...), you will need to explicitly override the build step with your instructions. It's important to use the `--offline` flag. For example if you script is `"build": "something"` in package.json use
+This will by default try to generate a binary. For packages that only generate static assets (Svelte, Vue, React, ...), you will need to explicitly override the build step with your instructions. It's important to use the `--offline` flag. For example, if your script is `"build": "something"` in `package.json`, use:
 
 ```nix
 buildPhase = ''
@@ -178,14 +195,13 @@ buildPhase = ''
 '';
 ```
 
-The dist phase is also trying to build a binary, the only way to override it is with
+The dist phase also tries to build a binary; the only way to override it is with:
 
 ```nix
 distPhase = "true";
 ```
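+
+Putting these together, a sketch of a package that only produces static assets might look like this (`src` and the script name are illustrative):
+
+```nix
+mkYarnPackage {
+  src = ./.;
+  buildPhase = ''
+    yarn --offline build
+  '';
+  distPhase = "true";
+}
+```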
 
-the configure phase can sometimes fail because it tries to be too clever.
-One common override is
+The configure phase can sometimes fail because it tries to be too clever. One common override is:
 
 ```nix
 configurePhase = "ln -s $node_modules node_modules";
@@ -193,13 +209,17 @@ configurePhase = "ln -s $node_modules node_modules";
 
 #### mkYarnModules {#javascript-yarn2nix-mkYarnModules}
 
-this will generate a derivation including the node_modules. If you have to build a derivation for an integrated web framework (rails, phoenix..), this is probably the easiest way. [Plausible](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix#L39) offers a good example of how to do this.
+This will generate a derivation including the node_modules. If you have to build a derivation for an integrated web framework (Rails, Phoenix, ...), this is probably the easiest way. [Plausible](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix#L39) offers a good example of how to do this.
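+
+A minimal sketch of such a derivation (the attribute values are illustrative; see the yarn2nix source for the full set of accepted arguments):
+
+```nix
+nodeModules = pkgs.mkYarnModules {
+  pname = "my-app-modules";
+  version = "1.0.0";
+  packageJSON = ./package.json;
+  yarnLock = ./yarn.lock;
+};
+```
+
+The resulting `node_modules` can then be linked or copied into the framework's build tree, for example with `ln -s ${nodeModules}/node_modules node_modules`, depending on the layout the framework expects.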
 
 #### Overriding dependency behavior
 
 In the `mkYarnPackage` record the property `pkgConfig` can be used to override packages when you encounter problems building.
 
-For instance, say your package is throwing errors when trying to invoke node-sass: `ENOENT: no such file or directory, scandir '/build/source/node_modules/node-sass/vendor'`
+For instance, say your package is throwing errors when trying to invoke node-sass:
+
+```
+ENOENT: no such file or directory, scandir '/build/source/node_modules/node-sass/vendor'
+```
 
 To fix this we will specify different versions of build inputs to use, as well as some post install steps to get the software built the way we want:
 
@@ -219,9 +239,8 @@ mkYarnPackage rec {
 
 #### Pitfalls {#javascript-yarn2nix-pitfalls}
 
-- if version is missing from upstream package.json, yarn will silently install nothing. In that case, you will need to override package.json as shown in the [package.json section](#javascript-upstream-package-json)
-
-- having trouble with node-gyp? Try adding these lines to the `yarnPreBuild` steps:
+- If version is missing from upstream package.json, yarn will silently install nothing. In that case, you will need to override package.json as shown in the [package.json section](#javascript-upstream-package-json).
+- Having trouble with `node-gyp`? Try adding these lines to the `yarnPreBuild` steps:
 
   ```nix
   yarnPreBuild = ''
@@ -237,20 +256,20 @@ mkYarnPackage rec {
 
 ## Outside of nixpkgs {#javascript-outside-nixpkgs}
 
-There are some other options available that can't be used inside nixpkgs. Those other options are written in nix. Importing them in nixpkgs will require moving the source code into nixpkgs. Using [Import From Derivation](https://nixos.wiki/wiki/Import_From_Derivation) is not allowed in hydra at present. If you are packaging something outside nixpkgs, those can be considered
+There are some other tools available that can't be used inside nixpkgs. These tools are written in Nix, so importing them into nixpkgs would require moving their source code into nixpkgs; in addition, using [Import From Derivation](https://nixos.wiki/wiki/Import_From_Derivation) is not allowed in Hydra at present. If you are packaging something outside nixpkgs, these tools can be considered.
 
 ### npmlock2nix {#javascript-npmlock2nix}
 
-[npmlock2nix](https://github.com/nix-community/npmlock2nix) aims at building node_modules without code generation. It hasn't reached v1 yet, the api might be subject to change.
+[npmlock2nix](https://github.com/nix-community/npmlock2nix) aims at building node_modules without code generation. It hasn't reached v1 yet, so the API might be subject to change.
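+
+At the time of writing, a typical invocation looks roughly like the following sketch; pin the npmlock2nix revision however you prefer and double-check the API against its README, since it may change before v1:
+
+```nix
+{ pkgs ? import <nixpkgs> { } }:
+
+let
+  npmlock2nix = import (pkgs.fetchFromGitHub {
+    owner = "nix-community";
+    repo = "npmlock2nix";
+    rev = "...";      # pin a commit
+    sha256 = "...";   # and its hash
+  }) { inherit pkgs; };
+in
+# builds the node_modules folder from package.json and package-lock.json
+npmlock2nix.node_modules { src = ./.; }
+```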
 
 #### Pitfalls {#javascript-npmlock2nix-pitfalls}
 
-- there are some [problems with npm v7](https://github.com/tweag/npmlock2nix/issues/45).
+There are some [problems with npm v7](https://github.com/tweag/npmlock2nix/issues/45).
 
 ### nix-npm-buildpackage {#javascript-nix-npm-buildpackage}
 
-[nix-npm-buildpackage](https://github.com/serokell/nix-npm-buildpackage) aims at building node_modules without code generation. It hasn't reached v1 yet, the api might change. It supports both package-lock.json and yarn.lock.
+[nix-npm-buildpackage](https://github.com/serokell/nix-npm-buildpackage) aims at building node_modules without code generation. It hasn't reached v1 yet, so the API might change. It supports both package-lock.json and yarn.lock.
 
 #### Pitfalls {#javascript-nix-npm-buildpackage-pitfalls}
 
-- there are some [problems with npm v7](https://github.com/serokell/nix-npm-buildpackage/issues/33).
+There are some [problems with npm v7](https://github.com/serokell/nix-npm-buildpackage/issues/33).
diff --git a/doc/languages-frameworks/ocaml.section.md b/doc/languages-frameworks/ocaml.section.md
index 47035551d4181..c6e40eaa20d00 100644
--- a/doc/languages-frameworks/ocaml.section.md
+++ b/doc/languages-frameworks/ocaml.section.md
@@ -38,8 +38,12 @@ Here is a simple package example.
 
 - It uses the `fetchFromGitHub` fetcher to get its source.
 
-- `useDune2 = true` ensures that the latest version of Dune is used for the
-  build (this may become the default value in a future release).
+- `duneVersion = "2"` ensures that Dune version 2 is used for the
+  build (this is the default; valid values are `"1"`, `"2"`, and `"3"`);
+  note that there is also a legacy `useDune2` boolean attribute:
+  set to `false` it corresponds to `duneVersion = "1"`; set to `true` it
+  corresponds to `duneVersion = "2"`. If both arguments (`duneVersion` and
+  `useDune2`) are given, the second one (`useDune2`) is silently ignored.
 
 - It sets the optional `doCheck` attribute such that tests will be run with
   `dune runtest -p angstrom` after the build (`dune build -p angstrom`) is
@@ -67,7 +71,7 @@ Here is a simple package example.
 buildDunePackage rec {
   pname = "angstrom";
   version = "0.15.0";
-  useDune2 = true;
+  duneVersion = "2";
 
   minimalOCamlVersion = "4.04";
 
diff --git a/doc/languages-frameworks/php.section.md b/doc/languages-frameworks/php.section.md
index 5977363323f18..8600e49d4570f 100644
--- a/doc/languages-frameworks/php.section.md
+++ b/doc/languages-frameworks/php.section.md
@@ -9,7 +9,7 @@ wide variety of extensions and libraries available.
 
 The different versions of PHP that nixpkgs provides are located under
 attributes named based on major and minor version number; e.g.,
-`php74` is PHP 7.4.
+`php81` is PHP 8.1.
 
 Only versions of PHP that are supported by upstream for the entirety
 of a given NixOS release will be included in that release of
@@ -23,7 +23,7 @@ NixOS - not necessarily the latest major release from upstream.
 All available PHP attributes are wrappers around their respective
 binary PHP package and provide commonly used extensions this way. The
-real PHP 7.4 package, i.e. the unwrapped one, is available as
-`php74.unwrapped`; see the next section for more details.
+real PHP 8.1 package, i.e. the unwrapped one, is available as
+`php81.unwrapped`; see the next section for more details.
 
 Interactive tools built on PHP are put in `php.packages`; composer is
 for example available at `php.packages.composer`.
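+
+For example, to get a quick shell with PHP and Composer, something like the following should work (attribute names follow the scheme described above):
+
+```sh
+nix-shell -p php81 php81.packages.composer
+```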
diff --git a/doc/languages-frameworks/python.section.md b/doc/languages-frameworks/python.section.md
index 693ea016e0a55..3211ae616a1cd 100644
--- a/doc/languages-frameworks/python.section.md
+++ b/doc/languages-frameworks/python.section.md
@@ -288,7 +288,7 @@ self: super: {
         ps: with ps; [
           pyflakes
           pytest
-          python-language-server
+          black
         ]
       ))
 
@@ -663,6 +663,70 @@ However, this is done in it's own phase, and not dependent on whether `doCheck =
 This can also be useful in verifying that the package doesn't assume commonly
 present packages (e.g. `setuptools`)
 
+#### Using pythonRelaxDepsHook {#using-pythonrelaxdepshook}
+
+It is common for upstream to specify a range of versions for its package
+dependencies. This makes sense, since it ensures that the package will be built
+with a subset of packages that is well tested. However, this commonly causes
+issues when packaging in Nixpkgs, because the dependencies that this package
+may need are too new or old for the package to build correctly. We also cannot
+package multiple versions of the same package since this may cause conflicts
+in `PYTHONPATH`.
+
+One way to side step this issue is to relax the dependencies, either by
+removing a package's version range or by removing the package declaration
+entirely. Both can be done using the `pythonRelaxDepsHook` hook. For
+example, given the following `requirements.txt` file:
+
+```
+pkg1<1.0
+pkg2
+pkg3>=1.0,<=2.0
+```
+
+we can do:
+
+```nix
+  nativeBuildInputs = [ pythonRelaxDepsHook ];
+  pythonRelaxDeps = [ "pkg1" "pkg3" ];
+  pythonRemoveDeps = [ "pkg2" ];
+```
+
+which would result in the following `requirements.txt` file:
+
+```
+pkg1
+pkg3
+```
+
+Another option is to pass `true`, which will relax/remove all dependencies, for
+example:
+
+```nix
+  nativeBuildInputs = [ pythonRelaxDepsHook ];
+  pythonRelaxDeps = true;
+```
+
+which would result in the following `requirements.txt` file:
+
+```
+pkg1
+pkg2
+pkg3
+```
+
+In general you should always use `pythonRelaxDeps`, because `pythonRemoveDeps`
+will convert build errors into runtime errors. However, `pythonRemoveDeps` may
+still be useful in exceptional cases, and also to remove dependencies wrongly
+declared by upstream (for example, declaring `black` as a runtime dependency
+instead of a dev dependency).
+
+Keep in mind that while the examples above are done with `requirements.txt`,
+`pythonRelaxDepsHook` works by modifying the resulting wheel file, so it should
+work in any of the formats supported by `buildPythonPackage` currently,
+with the exception of `other` (see `format` in
+[`buildPythonPackage` parameters](#buildpythonpackage-parameters) for more details).
+
 ### Develop local package {#develop-local-package}
 
 As a Python developer you're likely aware of [development mode](http://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode)
@@ -982,12 +1046,13 @@ in python.withPackages(ps: [ps.blaze])).env
 #### Optional extra dependencies
 
 Some packages define optional dependencies for additional features. With
-`setuptools` this is called `extras_require` and `flit` calls it `extras-require`. A
+`setuptools` this is called `extras_require` and `flit` calls it
+`extras-require`, while PEP 621 calls these `optional-dependencies`. A
 method for supporting this is by declaring the extras of a package in its
 `passthru`, e.g. in case of the package `dask`
 
 ```nix
-passthru.extras-require = {
+passthru.optional-dependencies = {
   complete = [ distributed ];
 };
 ```
@@ -997,7 +1062,7 @@ and letting the package requiring the extra add the list to its dependencies
 ```nix
 propagatedBuildInputs = [
   ...
-] ++ dask.extras-require.complete;
+] ++ dask.optional-dependencies.complete;
 ```
 
 Note this method is preferred over adding parameters to builders, as that can
@@ -1196,6 +1261,8 @@ are used in `buildPythonPackage`.
   to run commands only after venv is first created.
 - `wheelUnpackHook` to move a wheel to the correct folder so it can be installed
   with the `pipInstallHook`.
+- `pythonRelaxDepsHook` will relax Python dependencies restrictions for the package.
+  See [example usage](#using-pythonrelaxdepshook).
 
 ### Development mode {#development-mode}
 
diff --git a/doc/languages-frameworks/texlive.section.md b/doc/languages-frameworks/texlive.section.md
index 6b505cefcc95c..060f5c647c296 100644
--- a/doc/languages-frameworks/texlive.section.md
+++ b/doc/languages-frameworks/texlive.section.md
@@ -6,7 +6,7 @@ Since release 15.09 there is a new TeX Live packaging that lives entirely under
 
 - For basic usage just pull `texlive.combined.scheme-basic` for an environment with basic LaTeX support.
 
-- It typically won't work to use separately installed packages together. Instead, you can build a custom set of packages like this:
+- It typically won't work to use separately installed packages together. Instead, you can build a custom set of packages like this. Most CTAN packages should be available:
 
   ```nix
   texlive.combine {
diff --git a/doc/languages-frameworks/vim.section.md b/doc/languages-frameworks/vim.section.md
index a615d585b151c..6d7efe455b136 100644
--- a/doc/languages-frameworks/vim.section.md
+++ b/doc/languages-frameworks/vim.section.md
@@ -18,7 +18,7 @@ Adding custom .vimrc lines can be done using the following code:
 
 ```nix
 vim_configurable.customize {
-  # `name` specifies the name of the executable and package
+  # `name` optionally specifies the name of the executable and package
   name = "vim-with-plugins";
 
   vimrcConfig.customRC = ''
@@ -28,6 +28,9 @@ vim_configurable.customize {
 ```
 
 This configuration is used when Vim is invoked with the command specified as name, in this case `vim-with-plugins`.
+You can also omit `name` to customize Vim itself. See the
+[definition of `vimUtils.makeCustomizable`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/editors/vim/plugins/vim-utils.nix#L408)
+for all supported options.
 
 For Neovim the `configure` argument can be overridden to achieve the same:
 
@@ -274,9 +277,6 @@ my-vim =
        copy paste output2 here
     ];
 
-    # Pathogen would be
-    # vimrcConfig.pathogen.knownPlugins = plugins; # plugins
-    # vimrcConfig.pathogen.pluginNames = ["tlib"];
   };
 ```
 
@@ -286,7 +286,7 @@ Sample output1:
 "reload" = buildVimPluginFrom2Nix { # created by nix#NixDerivation
   name = "reload";
   src = fetchgit {
-    url = "git://github.com/xolox/vim-reload";
+    url = "https://github.com/xolox/vim-reload";
     rev = "0a601a668727f5b675cb1ddc19f6861f3f7ab9e1";
     sha256 = "0vb832l9yxj919f5hfg6qj6bn9ni57gnjd3bj7zpq7d4iv2s4wdh";
   };
diff --git a/doc/manual.xml b/doc/manual.xml
index b43021d85ca56..ccbaf40586d1a 100644
--- a/doc/manual.xml
+++ b/doc/manual.xml
@@ -25,8 +25,10 @@
   <title>Builders</title>
   <xi:include href="builders/fetchers.chapter.xml" />
   <xi:include href="builders/trivial-builders.chapter.xml" />
+  <xi:include href="builders/testers.chapter.xml" />
   <xi:include href="builders/special.xml" />
   <xi:include href="builders/images.xml" />
+  <xi:include href="hooks/index.xml" />
   <xi:include href="languages-frameworks/index.xml" />
   <xi:include href="builders/packages/index.xml" />
  </part>
diff --git a/doc/stdenv/cross-compilation.chapter.md b/doc/stdenv/cross-compilation.chapter.md
index f6e61a1af1962..3b6e5c34d54da 100644
--- a/doc/stdenv/cross-compilation.chapter.md
+++ b/doc/stdenv/cross-compilation.chapter.md
@@ -78,21 +78,46 @@ If both the dependency and depending packages aren't compilers or other machine-
 
 Finally, if the depending package is a compiler or other machine-code-producing tool, it might need dependencies that run at "emit time". This is for compilers that (regrettably) insist on being built together with their source languages' standard libraries. Assuming build != host != target, a run-time dependency of the standard library cannot be run at the compiler's build time or run time, but only at the run time of code emitted by the compiler.
 
-Putting this all together, that means we have dependencies in the form "host → target", in at most the following six combinations:
+Putting this all together, we have dependency types of the form "X → E", meaning that the dependency executes on X and emits code for E; each of X and E can be `build`, `host`, or `target`, and E can be `*` to indicate that the dependency is not a compiler-like package.
+
+Dependency types describe the relationships that a package has with each of its transitive dependencies.  You could think of attaching one or more dependency types to each of the formal parameters at the top of a package's `.nix` file, as well as to all of *their* formal parameters, and so on.  Triples like `(foo, bar, baz)`, on the other hand, are a property of an instantiated derivation -- you would attach a triple `(mips-linux, mips-linux, sparc-solaris)` to a `.drv` file in `/nix/store`.
+
+Only nine dependency types matter in practice:
 
 #### Possible dependency types {#possible-dependency-types}
 
-| Dependency’s host platform | Dependency’s target platform |
-|----------------------------|------------------------------|
-| build                      | build                        |
-| build                      | host                         |
-| build                      | target                       |
-| host                       | host                         |
-| host                       | target                       |
-| target                     | target                       |
+| Dependency type | Dependency’s host platform | Dependency’s target platform |
+|-----------------|----------------------------|------------------------------|
+| build → *       | build                      | (none)                       |
+| build → build   | build                      | build                        |
+| build → host    | build                      | host                         |
+| build → target  | build                      | target                       |
+| host → *        | host                       | (none)                       |
+| host → host     | host                       | host                         |
+| host → target   | host                       | target                       |
+| target → *      | target                     | (none)                       |
+| target → target | target                     | target                       |
+
+Let's use `g++` as an example to make this table clearer.  `g++` is a C++ compiler written in C.  Suppose we are building `g++` with a `(build, host, target)` platform triple of `(foo, bar, baz)`.  This means we are using a `foo`-machine to build a copy of `g++` which will run on a `bar`-machine and emit binaries for the `baz`-machine.
+
+* `g++` links against the host platform's `glibc` C library, which is a "host → *" dependency with a triple of `(bar, bar, *)`.  Since it is a library, not a compiler, it has no "target".
+
+* Since `g++` is written in C, the `gcc` compiler used to compile it is a "build → host" dependency of `g++` with a triple of `(foo, foo, bar)`.  This compiler runs on the build platform and emits code for the host platform.
+
+* `gcc` links against the build platform's `glibc` C library, which is a "build → *" dependency with a triple of `(foo, foo, *)`.  Since it is a library, not a compiler, it has no "target".
+
+* This `gcc` is itself compiled by an *earlier* copy of `gcc`.  This earlier copy of `gcc` is a "build → build" dependency of `g++` with a triple of `(foo, foo, foo)`.  This "early `gcc`" runs on the build platform and emits code for the build platform.
+
+* `g++` is bundled with `libgcc`, which includes a collection of target-machine routines for exception handling and software floating point emulation.  `libgcc` would be a "target → *" dependency with triple `(foo, baz, *)`, because it consists of machine code which gets linked against the output of the compiler that we are building.  It is a library, not a compiler, so it has no target of its own.
+
+* `libgcc` is written in C and compiled with `gcc`.  The `gcc` that compiles it will be a "build → target" dependency with triple `(foo, foo, baz)`.  It gets compiled *and run* at `g++`-build-time (on platform `foo`), but must emit code for the `baz`-platform.
+
+* `g++` allows inline assembler code, so it depends on access to a copy of the `gas` assembler.  This would be a "host → target" dependency with triple `(foo, bar, baz)`.
 
+* `g++` (and `gcc`) include a library `libgccjit.so`, which wraps the compiler in a library to create a just-in-time compiler.  In nixpkgs, this library is in the `libgccjit` package; if C++ required that programs have access to a JIT, `g++` would need to add a "target → target" dependency for `libgccjit` with triple `(foo, baz, baz)`.  This would ensure that the compiler ships with a copy of `libgccjit` which both executes on and generates code for the `baz`-platform.
 
-Some examples will make this table clearer. Suppose there's some package that is being built with a `(build, host, target)` platform triple of `(foo, bar, baz)`. If it has a build-time library dependency, that would be a "host → build" dependency with a triple of `(foo, foo, *)` (the target platform is irrelevant). If it needs a compiler to be built, that would be a "build → host" dependency with a triple of `(foo, foo, *)` (the target platform is irrelevant). That compiler, would be built with another compiler, also "build → host" dependency, with a triple of `(foo, foo, foo)`.
+* If `g++` itself linked against `libgccjit.so` (for example, to allow compile-time-evaluated C++ expressions), then the `libgccjit` package used to provide this functionality would be a "host → host" dependency of `g++`: it is code which runs on the `host` and emits code for execution on the `host`.
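+
+In a package expression, these dependency types correspond to the dependency attributes of `mkDerivation` (`depsBuildBuild`, `nativeBuildInputs`, `buildInputs`, and so on), as described in the stdenv chapter. A rough sketch, with illustrative package choices:
+
+```nix
+stdenv.mkDerivation {
+  pname = "example";
+  version = "1.0";
+  # build → build: tools used to build things that themselves run at build time
+  depsBuildBuild = [ buildPackages.stdenv.cc ];
+  # build → host: tools that run at build time and operate on the host output
+  nativeBuildInputs = [ cmake pkg-config ];
+  # host → target: ordinary libraries linked into the output
+  buildInputs = [ gmp ];
+}
+```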
 
 ### Cross packaging cookbook {#ssec-cross-cookbook}
 
diff --git a/doc/stdenv/meta.chapter.md b/doc/stdenv/meta.chapter.md
index d3e1dd5b27d82..475006b1259b6 100644
--- a/doc/stdenv/meta.chapter.md
+++ b/doc/stdenv/meta.chapter.md
@@ -175,6 +175,40 @@ The NixOS tests are available as `nixosTests` in parameters of derivations. For
 
 NixOS tests run in a VM, so they are slower than regular package tests. For more information see [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests).
 
+Alternatively, you can specify other derivations as tests. You can make use of
+the fact that `mkDerivation` accepts a function taking the final attributes to
+inject the correct package without relying on non-local definitions, even in
+the presence of `overrideAttrs`. In the example below that's
+`finalAttrs.finalPackage`, but you could choose a different name than
+`finalAttrs` if it already exists in your scope.
+
+`(mypkg.overrideAttrs f).passthru.tests` will be as expected, as long as the
+definition of `tests` does not rely on the original `mypkg` (or, if it does,
+overrides it everywhere it is used).
+
+```nix
+# my-package/default.nix
+{ stdenv, callPackage }:
+stdenv.mkDerivation (finalAttrs: {
+  # ...
+  passthru.tests.example = callPackage ./example.nix { my-package = finalAttrs.finalPackage; };
+})
+```
+
+```nix
+# my-package/example.nix
+{ runCommand, lib, my-package, ... }:
+runCommand "my-package-test" {
+  nativeBuildInputs = [ my-package ];
+  src = lib.sources.sourcesByRegex ./. [ ".*.in" ".*.expected" ];
+} ''
+  my-package --help
+  my-package <example.in >example.actual
+  diff -U3 --color=auto example.expected example.actual
+  mkdir $out
+''
+```
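+
+Such a test can then be built (and run) on its own, for instance:
+
+```sh
+nix-build -A my-package.tests.example
+```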
+
 ### `timeout` {#var-meta-timeout}
 
 A timeout (in seconds) for building the derivation. If the derivation takes longer than this time to build, it can fail due to breaking the timeout. However, all computers do not have the same computing power, hence some builders may decide to apply a multiplicative factor to this value. When filling this value in, try to keep it approximately consistent with other values already present in `nixpkgs`.
@@ -215,3 +249,31 @@ Unfree package that cannot be redistributed. You can build it yourself, but you
 ### `lib.licenses.unfreeRedistributableFirmware`, `"unfree-redistributable-firmware"` {#lib.licenses.unfreeredistributablefirmware-unfree-redistributable-firmware}
 
 This package supplies unfree, redistributable firmware. This is a separate value from `unfree-redistributable` because not everybody cares whether firmware is free.
+
+## Source provenance {#sec-meta-sourceProvenance}
+
+The value of a package's `meta.sourceProvenance` attribute specifies the provenance of the package's derivation outputs.
+
+If a package contains elements that are not built from the original source by a nixpkgs derivation, the `meta.sourceProvenance` attribute should be a list containing one or more values from `lib.sourceTypes` defined in [`nixpkgs/lib/source-types.nix`](https://github.com/NixOS/nixpkgs/blob/master/lib/source-types.nix).
+
+Adding this information helps users who have needs related to build transparency and supply-chain security to gain some visibility into their installed software or set policy to allow or disallow installation based on source provenance.
+
+The presence of a particular `sourceType` in a package's `meta.sourceProvenance` list indicates that the package contains some components falling into that category, though the *absence* of that `sourceType` does not *guarantee* the absence of that category of `sourceType` in the package's contents. A package with no `meta.sourceProvenance` set implies it has no *known* `sourceType`s other than `fromSource`.
+
+The meaning of the `meta.sourceProvenance` attribute does not depend on the value of the `meta.license` attribute.
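+
+For example, a package that repackages a prebuilt vendor binary might declare (a sketch):
+
+```nix
+meta = {
+  # the package ships a prebuilt native binary
+  sourceProvenance = with lib.sourceTypes; [ binaryNativeCode ];
+};
+```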
+
+### `lib.sourceTypes.fromSource` {#lib.sourceTypes.fromSource}
+
+Package elements which are produced by a nixpkgs derivation which builds them from source code.
+
+### `lib.sourceTypes.binaryNativeCode` {#lib.sourceTypes.binaryNativeCode}
+
+Native code to be executed on the target system's CPU, built by a third party. This includes packages which wrap a downloaded AppImage or Debian package.
+
+### `lib.sourceTypes.binaryFirmware` {#lib.sourceTypes.binaryFirmware}
+
+Code to be executed on a peripheral device or embedded controller, built by a third party.
+
+### `lib.sourceTypes.binaryBytecode` {#lib.sourceTypes.binaryBytecode}
+
+Code to run on a VM interpreter or JIT compiled into bytecode by a third party. This includes packages which download Java `.jar` files from another source.
diff --git a/doc/stdenv/multiple-output.chapter.md b/doc/stdenv/multiple-output.chapter.md
index 62bf543e51e55..65156816b9919 100644
--- a/doc/stdenv/multiple-output.chapter.md
+++ b/doc/stdenv/multiple-output.chapter.md
@@ -77,7 +77,7 @@ There is a special handling of the `debug` output, described at [](#stdenv-separ
 
 A commonly adopted convention in `nixpkgs` is that executables provided by the package are contained within its first output. This convention allows the dependent packages to reference the executables provided by packages in a uniform manner. For instance, provided with the knowledge that the `perl` package contains a `perl` executable it can be referenced as `${pkgs.perl}/bin/perl` within a Nix derivation that needs to execute a Perl script.
 
-The `glibc` package is a deliberate single exception to the “binaries first” convention. The `glibc` has `libs` as its first output allowing the libraries provided by `glibc` to be referenced directly (e.g. `${stdenv.glibc}/lib/ld-linux-x86-64.so.2`). The executables provided by `glibc` can be accessed via its `bin` attribute (e.g. `${stdenv.glibc.bin}/bin/ldd`).
+The `glibc` package is a deliberate single exception to the “binaries first” convention. The `glibc` has `libs` as its first output allowing the libraries provided by `glibc` to be referenced directly (e.g. `${glibc}/lib/ld-linux-x86-64.so.2`). The executables provided by `glibc` can be accessed via its `bin` attribute (e.g. `${lib.getBin stdenv.cc.libc}/bin/ldd`).
 
 The reason for why `glibc` deviates from the convention is because referencing a library provided by `glibc` is a very common operation among Nix packages. For instance, third-party executables packaged by Nix are typically patched and relinked with the relevant version of `glibc` libraries from Nix packages (please see the documentation on [patchelf](https://github.com/NixOS/patchelf) for more details).
 
diff --git a/doc/stdenv/stdenv.chapter.md b/doc/stdenv/stdenv.chapter.md
index 1d4ca99e3cbe5..b57698cb90b34 100644
--- a/doc/stdenv/stdenv.chapter.md
+++ b/doc/stdenv/stdenv.chapter.md
@@ -125,7 +125,7 @@ The extension of `PATH` with dependencies, alluded to above, proceeds according
 A dependency is said to be **propagated** when some of its other-transitive (non-immediate) downstream dependencies also need it as an immediate dependency.
 [^footnote-stdenv-propagated-dependencies]
 
-It is important to note that dependencies are not necessarily propagated as the same sort of dependency that they were before, but rather as the corresponding sort so that the platform rules still line up. To determine the exact rules for dependency propagation, we start by assigning to each dependency a couple of ternary numbers (`-1` for `build`, `0` for `host`, and `1` for `target`), representing how respectively its host and target platforms are "offset" from the depending derivation’s platforms. The following table summarize the different combinations that can be obtained:
+It is important to note that dependencies are not necessarily propagated as the same sort of dependency that they were before, but rather as the corresponding sort so that the platform rules still line up. To determine the exact rules for dependency propagation, we start by assigning to each dependency a couple of ternary numbers (`-1` for `build`, `0` for `host`, and `1` for `target`) representing its [dependency type](#possible-dependency-types), which captures how its host and target platforms are each "offset" from the depending derivation’s host and target platforms. The following table summarizes the different combinations that can be obtained:
 
 | `host → target`     | attribute name      | offset   |
 | ------------------- | ------------------- | -------- |
@@ -317,6 +317,60 @@ The script will be usually run from the root of the Nixpkgs repository but you s
 
 For information about how to run the updates, execute `nix-shell maintainers/scripts/update.nix`.
 
+### Recursive attributes in `mkDerivation`
+
+If you pass a function to `mkDerivation`, it will receive as its argument the final arguments, including the overrides when reinvoked via `overrideAttrs`. For example:
+
+```nix
+mkDerivation (finalAttrs: {
+  pname = "hello";
+  withFeature = true;
+  configureFlags =
+    lib.optionals finalAttrs.withFeature ["--with-feature"];
+})
+```
+
+Note that this does not use the `rec` keyword to reuse `withFeature` in `configureFlags`.
+The `rec` keyword works at the syntax level and is unaware of overriding.
+
+Instead, the definition references `finalAttrs`, allowing users to change `withFeature`
+consistently with `overrideAttrs`.
+
+`finalAttrs` also contains the attribute `finalPackage`, which includes the output paths, etc.
+
+Let's look at a more elaborate example to understand the differences between
+various bindings:
+
+```nix
+# `pkg` is the _original_ definition (for illustration purposes)
+let pkg =
+  mkDerivation (finalAttrs: {
+    # ...
+
+    # An example attribute
+    packages = [];
+
+    # `passthru.tests` is a commonly defined attribute.
+    passthru.tests.simple = f finalAttrs.finalPackage;
+
+    # An example of an attribute containing a function
+    passthru.appendPackages = packages':
+      finalAttrs.finalPackage.overrideAttrs (newSelf: super: {
+        packages = super.packages ++ packages';
+      });
+
+    # For illustration purposes; referenced as
+    # `(pkg.overrideAttrs(x)).finalAttrs` etc in the text below.
+    passthru.finalAttrs = finalAttrs;
+    passthru.original = pkg;
+  });
+in pkg
+```
+
+Unlike the `pkg` binding in the above example, the `finalAttrs` parameter always references the final attributes. For instance `(pkg.overrideAttrs(x)).finalAttrs.finalPackage` is identical to `pkg.overrideAttrs(x)`, whereas `(pkg.overrideAttrs(x)).original` is the same as the original `pkg`.
+
+See also the section about [`passthru.tests`](#var-meta-tests).
+
 ## Phases {#sec-stdenv-phases}
 
 `stdenv.mkDerivation` sets the Nix [derivation](https://nixos.org/manual/nix/stable/expressions/derivations.html#derivations)'s builder to a script that loads the stdenv `setup.sh` bash library and calls `genericBuild`. Most packaging functions rely on this default builder.
@@ -815,7 +869,7 @@ makeWrapper $out/bin/foo $wrapperfile --set FOOBAR baz
 makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello git ]}
 ```
 
-There’s many more kinds of arguments, they are documented in `nixpkgs/pkgs/build-support/setup-hooks/make-wrapper.sh` for the `makeWrapper` implementation and in `nixpkgs/pkgs/build-support/setup-hooks/make-binary-wrapper.sh` for the `makeBinaryWrapper` implementation.
+There’s many more kinds of arguments, they are documented in `nixpkgs/pkgs/build-support/setup-hooks/make-wrapper.sh` for the `makeWrapper` implementation and in `nixpkgs/pkgs/build-support/setup-hooks/make-binary-wrapper/make-binary-wrapper.sh` for the `makeBinaryWrapper` implementation.
 
 `wrapProgram` is a convenience function you probably want to use most of the time, implemented by both `makeWrapper` and `makeBinaryWrapper`.
 
@@ -1043,7 +1097,7 @@ You can also specify a `runtimeDependencies` variable which lists dependencies t
 
 In certain situations you may want to run the main command (`autoPatchelf`) of the setup hook on a file or a set of directories instead of unconditionally patching all outputs. This can be done by setting the `dontAutoPatchelf` environment variable to a non-empty value.
 
-By default `autoPatchelf` will fail as soon as any ELF file requires a dependency which cannot be resolved via the given build inputs. In some situations you might prefer to just leave missing dependencies unpatched and continue to patch the rest. This can be achieved by setting the `autoPatchelfIgnoreMissingDeps` environment variable to a non-empty value.
+By default `autoPatchelf` will fail as soon as any ELF file requires a dependency which cannot be resolved via the given build inputs. In some situations you might prefer to just leave missing dependencies unpatched and continue to patch the rest. This can be achieved by setting the `autoPatchelfIgnoreMissingDeps` environment variable to a non-empty value. `autoPatchelfIgnoreMissingDeps` can be set to a list like `autoPatchelfIgnoreMissingDeps = [ "libcuda.so.1" "libcudart.so.1" ];` or to simply `[ "*" ]` to ignore all missing dependencies.
 
 The `autoPatchelf` command also recognizes a `--no-recurse` command line flag, which prevents it from recursing into subdirectories.
 
diff --git a/doc/using/configuration.chapter.md b/doc/using/configuration.chapter.md
index 932b24237c02e..2445aa32f2a7f 100644
--- a/doc/using/configuration.chapter.md
+++ b/doc/using/configuration.chapter.md
@@ -176,6 +176,15 @@ You can define a function called `packageOverrides` in your local `~/.config/nix
 }
 ```
 
+## `config` Options Reference {#sec-config-options-reference}
+
+The following attributes can be passed in [`config`](#chap-packageconfig).
+
+```{=docbook}
+<include xmlns="http://www.w3.org/2001/XInclude" href="../doc-support/result/config-options.docbook.xml"/>
+```
+
 ## Declarative Package Management {#sec-declarative-package-management}
 
 ### Build an environment {#sec-building-environment}
diff --git a/doc/using/overlays.chapter.md b/doc/using/overlays.chapter.md
index df152bc14e7b8..a51aa9ee8fc54 100644
--- a/doc/using/overlays.chapter.md
+++ b/doc/using/overlays.chapter.md
@@ -77,7 +77,7 @@ In Nixpkgs, we have multiple implementations of the BLAS/LAPACK numerical linear
 
     The Nixpkgs attribute is `openblas` for ILP64 (integer width = 64 bits) and `openblasCompat` for LP64 (integer width = 32 bits).  `openblasCompat` is the default.
 
--   [LAPACK reference](http://www.netlib.org/lapack/) (also provides BLAS)
+-   [LAPACK reference](http://www.netlib.org/lapack/) (also provides BLAS and CBLAS)
 
     The Nixpkgs attribute is `lapack-reference`.
 
@@ -117,7 +117,23 @@ $ LD_LIBRARY_PATH=$(nix-build -A mkl)/lib${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH n
 
 Intel MKL requires an `openmp` implementation when running with multiple processors. By default, `mkl` will use Intel's `iomp` implementation if no other is specified, but this is a runtime-only dependency and binary compatible with the LLVM implementation. To use that one instead, Intel recommends users set it with `LD_PRELOAD`. Note that `mkl` is only available on `x86_64-linux` and `x86_64-darwin`. Moreover, Hydra is not building and distributing pre-compiled binaries using it.
 
-For BLAS/LAPACK switching to work correctly, all packages must depend on `blas` or `lapack`. This ensures that only one BLAS/LAPACK library is used at one time. There are two versions of BLAS/LAPACK currently in the wild, `LP64` (integer size = 32 bits) and `ILP64` (integer size = 64 bits). Some software needs special flags or patches to work with `ILP64`. You can check if `ILP64` is used in Nixpkgs with `blas.isILP64` and `lapack.isILP64`. Some software does NOT work with `ILP64`, and derivations need to specify an assertion to prevent this. You can prevent `ILP64` from being used with the following:
+To override `blas` and `lapack` with their reference implementations (i.e. for development purposes), one can use the following overlay:
+
+```nix
+self: super:
+
+{
+  blas = super.blas.override {
+    blasProvider = self.lapack-reference;
+  };
+
+  lapack = super.lapack.override {
+    lapackProvider = self.lapack-reference;
+  };
+}
+```
+
+For BLAS/LAPACK switching to work correctly, all packages must depend on `blas` or `lapack`. This ensures that only one BLAS/LAPACK library is used at one time. There are two versions of BLAS/LAPACK currently in the wild, `LP64` (integer size = 32 bits) and `ILP64` (integer size = 64 bits). The attributes `blas` and `lapack` are `LP64` by default. Their `ILP64` versions are provided through the attributes `blas-ilp64` and `lapack-ilp64`. Some software needs special flags or patches to work with `ILP64`. You can check if `ILP64` is used in Nixpkgs with `blas.isILP64` and `lapack.isILP64`. Some software does NOT work with `ILP64`, and derivations need to specify an assertion to prevent this. You can prevent `ILP64` from being used with the following:
 
 ```nix
 { stdenv, blas, lapack, ... }:
diff --git a/doc/using/overrides.chapter.md b/doc/using/overrides.chapter.md
index 66e5103531a9a..a97a39354a9d8 100644
--- a/doc/using/overrides.chapter.md
+++ b/doc/using/overrides.chapter.md
@@ -39,14 +39,18 @@ The function `overrideAttrs` allows overriding the attribute set passed to a `st
 Example usage:
 
 ```nix
-helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
+helloWithDebug = pkgs.hello.overrideAttrs (finalAttrs: previousAttrs: {
   separateDebugInfo = true;
 });
 ```
 
 In the above example, the `separateDebugInfo` attribute is overridden to be true, thus building debug info for `helloWithDebug`, while all other attributes will be retained from the original `hello` package.
 
-The argument `oldAttrs` is conventionally used to refer to the attr set originally passed to `stdenv.mkDerivation`.
+The argument `previousAttrs` is conventionally used to refer to the attr set originally passed to `stdenv.mkDerivation`.
+
+The argument `finalAttrs` refers to the final attributes passed to `mkDerivation`, plus the `finalPackage` attribute which is equal to the result of `mkDerivation` or subsequent `overrideAttrs` calls.
+
+If only a one-argument function is written, the argument has the meaning of `previousAttrs`.
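+
+For example, the same override written with the single-argument form:
+
+```nix
+helloWithDebug = pkgs.hello.overrideAttrs (previousAttrs: {
+  separateDebugInfo = true;
+});
+```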
 
 ::: {.note}
 Note that `separateDebugInfo` is processed only by the `stdenv.mkDerivation` function, not the generated, raw Nix derivation. Thus, using `overrideDerivation` will not work in this case, as it overrides only the attributes of the final derivation. It is for this reason that `overrideAttrs` should be preferred in (almost) all cases to `overrideDerivation`, i.e. to allow using `stdenv.mkDerivation` to process input arguments, as well as the fact that it is easier to use (you can use the same attribute names you see in your Nix code, instead of the ones generated (e.g. `buildInputs` vs `nativeBuildInputs`), and it involves less typing).