'\" t .\" Man page generated from reStructuredText. . . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .TH "MONGOC_REFERENCE" "3" "Feb 25, 2024" "1.26.0" "libmongoc" .SH LIBMONGOC .sp A Cross Platform MongoDB Client Library for C .SS Introduction .sp The MongoDB C Driver, also known as \(dqlibmongoc\(dq, is a library for using MongoDB from C applications, and for writing MongoDB drivers in higher\-level languages. .sp It depends on \fI\%libbson\fP to generate and parse BSON documents, the native data format of MongoDB. .SS Tutorials .sp This section contains tutorials on how to get started with the basics of using the C driver. .SS Obtaining the MongoDB C Driver Libraries .sp There are a few methods of obtaining the \fBmongo\-c\-driver\fP codebase: .SS Building the C Driver Libraries from Source .sp This page details how to download, unpack, configure, and build \fBlibbson\fP and \fBlibmongoc\fP from their original source\-code form. .sp Extra information .sp Dropdowns (like this one) contain extra information and explanatory details that are not required to complete the tutorial, but may be helpful for curious readers, and more advanced users that want an explanation of the meaning of certain tutorial steps. .sp The following page uses a few named \(dqvariables\(dq that you must decide up\-front. 
When you see such a value referenced in a tutorial step, you should substitute the value into that step. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 Before building, you may want to check that you are running on a supported platform. For the list of supported platforms, refer to the \fI\%mongo\-c\-driver Platform Support\fP page. .UNINDENT .UNINDENT .SS Choose a Version .sp Before we begin, know what version of \fBmongo\-c\-driver\fP you will be downloading. A list of available versions can be found on \fI\%the GitHub repository tags page\fP\&. (The current version written for this documentation is \fB1.26.0\fP\&.) .sp For the remainder of this page, \fB$VERSION\fP will refer to the version number of \fBmongo\-c\-driver\fP that you will be building for this tutorial. .SS Obtaining the Source .sp There are two primary recommended methods of obtaining the \fBmongo\-c\-driver\fP source code: .INDENT 0.0 .IP 1. 3 Clone the repository using \fBgit\fP (recommended). \fI\%(See below)\fP .IP 2. 3 Download a source archive at a specific version. \fI\%(See below)\fP .UNINDENT .sp \fBIMPORTANT:\fP .INDENT 0.0 .INDENT 3.5 It is \fBhighly recommended\fP that new users use a stable released version of the driver, rather than building from a development branch. When you \fBgit clone\fP or download an archive of the repository, be sure to specify a release tag (e.g. with Git\(aqs \fB\-\-branch\fP argument). .UNINDENT .UNINDENT .SS Downloading Using Git .sp Using Git, the C driver repository can be cloned from the GitHub URL \fI\%https://github.com/mongodb/mongo\-c\-driver.git\fP\&. Git tags for released versions are named after the version to which they correspond (e.g. \(dq\fB1.26.0\fP\(dq).
To clone the repository using the command line, the following command may be used: .INDENT 0.0 .INDENT 3.5 .sp .EX $ git clone https://github.com/mongodb/mongo\-c\-driver.git \-\-branch=\(dq$VERSION\(dq \(dq$SOURCE\(dq .EE .UNINDENT .UNINDENT .sp \fBTIP:\fP .INDENT 0.0 .INDENT 3.5 Despite the name, \fBgit\-clone\fP\(aqs \fB\-\-branch\fP argument may also be used to clone from repository \fItags\fP\&. .UNINDENT .UNINDENT .SS Downloading a Release Archive .sp An archived snapshot of the repository can be obtained from the \fI\%GitHub Releases Page\fP\&. The \fBmongo\-c\-driver\-x.y.z.tar.gz\fP archive attached to any release contains the minimal set of files that you\(aqll need for the build. .sp Using \fBwget\fP + \fBtar\fP .INDENT 0.0 .INDENT 3.5 .sp .EX ## Download using wget: $ wget \(dqhttps://github.com/mongodb/mongo\-c\-driver/archive/refs/tags/$VERSION.tar.gz\(dq \e \-\-output\-document=\(dqmongo\-c\-driver\-$VERSION.tar.gz\(dq ## Extract using tar: $ tar xf \(dqmongo\-c\-driver\-$VERSION.tar.gz\(dq .EE .UNINDENT .UNINDENT .sp Using \fBcurl\fP + \fBtar\fP .INDENT 0.0 .INDENT 3.5 .sp .EX ## Using curl (\-L follows the redirect that GitHub issues for archive URLs): $ curl \-L \(dqhttps://github.com/mongodb/mongo\-c\-driver/archive/refs/tags/$VERSION.tar.gz\(dq \e \-\-output \(dqmongo\-c\-driver\-$VERSION.tar.gz\(dq ## Extract using tar: $ tar xf \(dqmongo\-c\-driver\-$VERSION.tar.gz\(dq .EE .UNINDENT .UNINDENT .sp PowerShell .INDENT 0.0 .INDENT 3.5 .sp .EX ## Use Invoke\-WebRequest: PS> $url = \(dqhttps://github.com/mongodb/mongo\-c\-driver/archive/refs/tags/$VERSION.zip\(dq PS> $file = \(dqmongo\-c\-driver\-$VERSION.zip\(dq PS> Invoke\-WebRequest \-UseBasicParsing \-Uri $url \-OutFile $file ## Extract using Expand\-Archive: PS> Expand\-Archive mongo\-c\-driver\-$VERSION.zip .EE .UNINDENT .UNINDENT .sp The above commands will create a new directory \fBmongo\-c\-driver\-$VERSION\fP within the directory in which you ran the \fBtar\fP/\fBExpand\-Archive\fP command (\fBnote\fP: PowerShell will create an additional intermediate
subdirectory of the same name). This directory is the root of the driver source tree (which we refer to as \fB$SOURCE\fP in these documents). The \fB$SOURCE\fP directory should contain the top\-level \fBCMakeLists.txt\fP file. .SS Obtaining Prerequisites .sp In order to build the project, a few prerequisites need to be available. .sp Both \fBlibmongoc\fP and \fBlibbson\fP projects use \fI\%CMake\fP for build configuration. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 It is \fIhighly recommended\fP \-\- but not \fIrequired\fP \-\- that you download the latest stable CMake available for your platform. .UNINDENT .UNINDENT .sp Getting the Latest CMake .sp A new stable release of CMake can be obtained from \fI\%the CMake downloads page\fP\&. .sp For Windows and macOS, simply download the CMake \fB\&.msi\fP/\fB\&.dmg\fP (not the \fB\&.zip\fP/\fB\&.tar.gz\fP) and use it to install CMake. .sp On Linux, download the self\-extracting shell script (ending with \fB\&.sh\fP) and execute it using the \fBsh\fP utility, passing the appropriate arguments to perform the install. For example, with the CMake 3.27.0 on the \fBx86_64\fP platform, the following command can be used on the \fBcmake\-3.27.0\-linux\-x86_64.sh\fP script: .INDENT 0.0 .INDENT 3.5 .sp .EX $ sh cmake\-3.27.0\-linux\-x86_64.sh \-\-prefix=\(dq$HOME/.local\(dq \-\-exclude\-subdir \-\-skip\-license .EE .UNINDENT .UNINDENT .sp Assuming that \fB$HOME/.local/bin\fP is on your \fB$PATH\fP list, the \fBcmake\fP command for 3.27.0 will then become available. .sp The \fB\-\-help\fP option can be passed to the shell script for more information. .sp For the remainder of this page, it will be assumed that \fBcmake\fP is available as a command on your \fBPATH\fP environment variable and can be executed as \(dq\fBcmake\fP\(dq from a shell. 
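.sp
If you installed CMake into a user\-local prefix such as \fB$HOME/.local\fP (the path used in the install example above), the following sketch (Bourne\-style shell syntax; the prefix path is an assumption) prepends its \fBbin\fP directory to \fBPATH\fP and checks that the change took effect:

```shell
# Prepend the user-local bin directory to PATH (sh/bash/zsh syntax).
# "$HOME/.local" matches the --prefix used in the install example above;
# substitute your own prefix if it differs.
export PATH="$HOME/.local/bin:$PATH"

# Check that the directory is now present on PATH:
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "PATH ok" ;;
  *) echo "PATH missing" ;;
esac
```

To make the change persistent, add the \fBexport\fP line to your shell profile (e.g. \fB~/.profile\fP).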
You can test this by requesting the \fB\-\-version\fP from CMake from the command line: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cmake \-\-version cmake version 3.21.4 CMake suite maintained and supported by Kitware (kitware.com/cmake). .EE .UNINDENT .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 If you intend to build \fBlibbson\fP \fIonly\fP, then CMake is sufficient for the build. Additional C driver features may require additional external dependencies be installed, but we will not worry about them here. .UNINDENT .UNINDENT .SS Configuring for \fBlibbson\fP .sp \fBIMPORTANT:\fP .INDENT 0.0 .INDENT 3.5 If you are building with Xcode [1] or Visual Studio [2], you may need to execute CMake from within a special environment in which the respective toolchain is available. .UNINDENT .UNINDENT .sp Let the name \fB$BUILD\fP be the path \fB$SOURCE/_build\fP\&. This will be the directory where our built files will be written by CMake. .sp With the source directory for \fBmongo\-c\-driver\fP at \fB$SOURCE\fP and build directory \fB$BUILD\fP, the following command can be executed from a command\-line to configure the project with both \fBlibbson\fP and \fBlibmongoc\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cmake \-S $SOURCE \-B $BUILD \e \-D ENABLE_EXTRA_ALIGNMENT=OFF \e \-D ENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF \e \-D CMAKE_BUILD_TYPE=RelWithDebInfo \e \-D BUILD_VERSION=\(dq$VERSION\(dq \e \-D ENABLE_MONGOC=OFF .EE .UNINDENT .UNINDENT .sp If all dependencies are satisfied, the above command should succeed and end with: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cmake … ## … (Lines of output) … \-\- Generating done \-\- Build files have been written to: $BUILD .EE .UNINDENT .UNINDENT .sp If configuration failed with an error, refer to the CMake output for error messages and information. Ensure that configuration succeeds before proceeding. .sp What do these CMake arguments mean? .sp The \fBBUILD_VERSION\fP sets the version number that will be included in the build results. 
This should be set to the same value as the version of the source driver that was downloaded in \fI\%Obtaining the Source\fP\&. .sp The \fBENABLE_EXTRA_ALIGNMENT\fP and \fBENABLE_AUTOMATIC_INIT_AND_CLEANUP\fP options are part of \fBmongo\-c\-driver\fP, and correspond to deprecated features that are only enabled by default for ABI compatibility purposes. It is highly recommended to disable these features whenever possible. .sp The \fBENABLE_MONGOC=OFF\fP argument disables building \fBlibmongoc\fP\&. We\(aqll build that in the next section. .sp The \fI\%CMAKE_BUILD_TYPE\fP setting tells CMake what variant of code will be generated. In the case of \fBRelWithDebInfo\fP, optimized binaries will be produced, but still include debug information. The \fI\%CMAKE_BUILD_TYPE\fP has no effect on Multi\-Config generators (i.e. Visual Studio), which instead rely on the \fB\-\-config\fP option when building/installing. .SS Building the Project .sp After successfully configuring the project, the build can be executed by using CMake: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cmake \-\-build $BUILD \-\-config RelWithDebInfo \-\-parallel .EE .UNINDENT .UNINDENT .sp If configured properly and all dependencies are satisfied, then the above command should proceed to compile and link the configured components. If the above command fails, then there is likely an error with your environment, or you are using an unsupported/untested platform. Refer to the build tool output for more information. .sp The \fB\-\-config\fP option .sp The \fI\%\-\-config\fP option is used to set the build configuration to use in the case of Multi\-Config generators (i.e. Visual Studio). It has no effect on other generators, which instead use \fI\%CMAKE_BUILD_TYPE\fP\&. .SS Installing the Built Results .sp Let \fB$PREFIX\fP be the path \fB$SOURCE/_install\fP\&.
We can use CMake to install the built results: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cmake \-\-install \(dq$BUILD\(dq \-\-prefix \(dq$PREFIX\(dq \-\-config RelWithDebInfo .EE .UNINDENT .UNINDENT .sp This command will install the \fBmongo\-c\-driver\fP build results into the \fB$PREFIX\fP directory. .sp The \fB\-\-config\fP option .sp The \fI\%\-\-config\fP option is only used for Multi\-Config generators (i.e. Visual Studio) and is otherwise ignored. The value given for \fB\-\-config\fP must be the same as was given for \fI\%\-\-config\fP with \fBcmake \-\-build\fP\&. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 The above snippet simply installs \fBmongo\-c\-driver\fP in a subdirectory of the source directory itself, but this is not a normal workflow. Once you feel comfortable configuring and building \fBmongo\-c\-driver\fP, the page \fI\%How to: Install libbson/libmongoc from Source\fP will do a deeper dive on from\-source installation options. .UNINDENT .UNINDENT .SS Configuring with \fBlibmongoc\fP .sp If you followed the above steps starting from \fI\%Configuring for libbson\fP, your final result will only contain \fBlibbson\fP and not the full C database driver library. Building of \fBlibmongoc\fP is enabled/disabled using the \fBENABLE_MONGOC\fP CMake variable. Re\-run CMake, but set \fBENABLE_MONGOC\fP to \fBON\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cmake \-D ENABLE_MONGOC=ON $BUILD .EE .UNINDENT .UNINDENT .sp If the above command succeeds, then the project has been reconfigured to build with \fBlibmongoc\fP\&. Follow the process at \fI\%Building the Project\fP and \fI\%Installing the Built Results\fP again to build and install \fBlibmongoc\fP\&. .SH FOOTNOTES .IP [1] 5 If you wish to configure and build the project with Xcode, the Xcode command\-line tools need to be installed and made available in the environment.
From within a command\-line environment, run: .INDENT 0.0 .INDENT 3.5 .sp .EX $ xcode\-select \-\-install .EE .UNINDENT .UNINDENT .sp This will ensure that the compilers and linkers are available on your \fB$PATH\fP\&. .IP [2] 5 If you wish to configure and build the project using Microsoft Visual C++, then the Visual C++ tools and environment variables may need to be set when running any CMake or build command. .sp In many cases, CMake will detect a Visual Studio installation and automatically load the environment itself when it is executed. This automatic detection can be controlled with CMake\(aqs \fI\%\-G\fP, \fI\%\-T\fP, and \fI\%\-A\fP options. The \fB\-G\fP option is the most significant, as it selects which Visual Studio version will be used. The versions of Visual Studio supported depend on the version of CMake that you have installed. \fI\%A list of supported Visual Studio versions can be found here\fP\&. .sp For greater control and more tooling options, it is recommended to run commands from within a Visual Studio \fIDeveloper PowerShell\fP (preferred) or \fIDeveloper Command Prompt\fP (legacy). .sp For more information, refer to: \fI\%Visual Studio Developer Command Prompt and Developer PowerShell\fP and \fI\%Use the Microsoft C++ toolset from the command line\fP on the Microsoft Visual Studio documentation pages. .SS Installing Prebuilt MongoDB C Driver Libraries .sp The \fBlibmongoc\fP and \fBlibbson\fP libraries are often available in the package management repositories of \fI\%common Linux distributions\fP and \fI\%macOS via Homebrew\fP\&. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 For Windows, it is recommended to instead \fI\%build the libraries from source\fP, for maximum compatibility with the local toolchain. Building from source can be automated by using a from\-source library package management tool such as \fI\%Conan\fP or \fI\%vcpkg\fP (See: \fI\%Cross Platform Installs Using Library Package Managers\fP).
.UNINDENT .UNINDENT .sp \fBCAUTION:\fP .INDENT 0.0 .INDENT 3.5 If you install and use prebuilt binaries from a third\-party packager, it is possible that it lags behind the version of the libraries described in these documentation pages (1.26.0). Note the version that you install and keep it in mind when reading these pages. .sp For the most up\-to\-date versions of the C driver libraries, prefer to instead \fI\%build from source\fP\&. .UNINDENT .UNINDENT .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 For a listing and common reference on available packages, refer to \fI\%Package Installation Reference\fP\&. .UNINDENT .UNINDENT .SS Cross Platform Installs Using Library Package Managers .sp Various library package managers offer \fBlibbson\fP and \fBlibmongoc\fP as installable packages, including \fI\%Conan\fP and \fI\%vcpkg\fP\&. This section will detail how to install using those tools. .SS Installing using vcpkg .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 This page will not detail how to get started using \fI\%vcpkg\fP\&. For that, refer to \fI\%Get started with vcpkg\fP .UNINDENT .UNINDENT .sp vcpkg Manifest Mode (Recommended) In \fI\%vcpkg manifest mode\fP, add the desired libraries to your project\(aqs \fBvcpkg.json\fP manifest file: .sp \fBvcpkg.json\fP .INDENT 0.0 .INDENT 3.5 .sp .EX { // ... \(dqdependencies\(dq: [ // ... \(dqmongo\-c\-driver\(dq ] } .EE .UNINDENT .UNINDENT .sp When you build a CMake project with vcpkg integration and have a \fBvcpkg.json\fP manifest file, vcpkg will automatically install the project\(aqs dependencies before proceeding with the configuration phase, so no additional manual work is required. 
.sp vcpkg Classic Mode In \fI\%vcpkg classic mode\fP, \fBlibbson\fP and \fBlibmongoc\fP can be installed through the names \fBlibbson\fP and \fBmongo\-c\-driver\fP, respectively: .INDENT 0.0 .INDENT 3.5 .sp .EX $ vcpkg install mongo\-c\-driver .EE .UNINDENT .UNINDENT .sp (Installing \fBmongo\-c\-driver\fP will transitively install \fBlibbson\fP as well.) .sp When the \fBlibmongoc\fP and \fBlibbson\fP packages are installed and vcpkg has been properly integrated into your build system, the desired libraries will be available for import. .sp With CMake, the standard config\-file package will be available, as well as the generated \fBIMPORTED\fP targets: .sp \fBCMakeLists.txt\fP .INDENT 0.0 .INDENT 3.5 .sp .EX find_package(mongoc\-1.0 CONFIG REQUIRED) target_link_libraries(my\-application PRIVATE $<IF:$<TARGET_EXISTS:mongo::mongoc_shared>,mongo::mongoc_shared,mongo::mongoc_static>) .EE .UNINDENT .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 The large \fB$<IF:...>\fP generator expression (\fI\%cmake\-generator\-expressions(7)\fP) can be used to switch the link type of \fBlibmongoc\fP based on whichever form is available from the \fBfind_package()\fP command. \fBlibmongoc\fP supports building with both \fIdynamic\fP and \fIstatic\fP library types, but vcpkg will only install one of the two library types at a time. .UNINDENT .UNINDENT .sp Configuring a CMake project with vcpkg integration is a matter of setting the CMake toolchain file at the initial configure command: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cmake \-S . \-B _build \-D CMAKE_TOOLCHAIN_FILE=$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake .EE .UNINDENT .UNINDENT .SS Installing in Linux .sp The names and process of installing \fBlibbson\fP and \fBlibmongoc\fP vary between distributions, but generally follow a similar pattern.
.sp The following Linux distributions provide \fBlibbson\fP and \fBlibmongoc\fP packages: .INDENT 0.0 .IP \(bu 2 \fI\%Fedora\fP via \fBdnf\fP .IP \(bu 2 \fI\%RedHat Enterprise Linux (RHEL) 7 and Newer\fP and distributions based on RHEL 7 or newer, including \fI\%CentOS, Rocky Linux, and AlmaLinux\fP, via \fByum\fP/\fBdnf\fP and \fI\%EPEL\fP\&. .IP \(bu 2 \fI\%Debian\fP and Debian\-based distributions, including \fI\%Ubuntu\fP and Ubuntu derivatives, via APT. .UNINDENT .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 For a list of available packages and package options, see: \fI\%Package Installation Reference\fP\&. .UNINDENT .UNINDENT .SS RedHat\-based Systems .sp In RedHat\-based Linux distributions, including \fBFedora\fP, \fBCentOS\fP, \fBRocky Linux\fP, and \fBAlmaLinux\fP, the C driver libraries can be installed with Yum/DNF. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 For Fedora and enterprise Linux of version 8 or greater, it is recommended to use the \fBdnf\fP command in place of any \fByum\fP command. .UNINDENT .UNINDENT .sp \fBIMPORTANT:\fP .INDENT 0.0 .INDENT 3.5 \fBExcept for Fedora:\fP .sp The C driver libraries are only available in version 7 and newer of the respective enterprise Linux distributions. They are not included in the default repositories, but can be obtained by enabling the \fI\%EPEL\fP repositories. This can be done by installing the \fBepel\-release\fP package: .INDENT 0.0 .INDENT 3.5 .sp .EX # yum install epel\-release .EE .UNINDENT .UNINDENT .sp \fBepel\-release\fP must be installed before attempting to install the C driver libraries (i.e. one cannot install them both in a single \fByum install\fP command).
.UNINDENT .UNINDENT .sp To install \fBlibbson\fP only, install the \fBlibbson\-devel\fP package: .INDENT 0.0 .INDENT 3.5 .sp .EX # yum install libbson\-devel .EE .UNINDENT .UNINDENT .sp To install the full C database driver (\fBlibmongoc\fP), install \fBmongo\-c\-driver\-devel\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX ## (This package will transitively install libbson\-devel) # yum install mongo\-c\-driver\-devel .EE .UNINDENT .UNINDENT .sp To check which version is available, see \fI\%https://packages.fedoraproject.org/pkgs/mongo\-c\-driver/mongo\-c\-driver\-devel\fP\&. .sp The development packages (ending in \fB\-devel\fP) include files required to build applications using \fBlibbson\fP and \fBlibmongoc\fP\&. To only install the libraries without development files, install the \fBlibbson\fP or \fBmongo\-c\-driver\-libs\fP packages. .SS Debian\-based Systems .sp In Debian\-based Linux distributions, including Ubuntu and Ubuntu derivatives, \fBlibbson\fP and \fBlibmongoc\fP are available in the distribution repositories via APT, and can be installed as \fBlibbson\-dev\fP and \fBlibmongoc\-dev\fP, respectively: .INDENT 0.0 .INDENT 3.5 .sp .EX ## Update repository information, if necessary: # apt update .EE .UNINDENT .UNINDENT .sp To install only \fBlibbson\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX # apt install libbson\-dev .EE .UNINDENT .UNINDENT .sp To install \fBlibmongoc\fP (which will also install \fBlibbson\fP): .INDENT 0.0 .INDENT 3.5 .sp .EX # apt install libmongoc\-dev .EE .UNINDENT .UNINDENT .sp To check which version is available, run \fBapt\-cache policy libmongoc\-dev\fP\&. .sp The development packages (ending in \fB\-dev\fP) include files required to build applications using \fBlibbson\fP and \fBlibmongoc\fP\&. To only install the libraries without development files, install the \fBlibbson\-1.0\-0\fP or \fBlibmongoc\-1.0\-0\fP packages. 
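.sp
After the development packages are installed, a quick check can confirm that the package metadata is visible to the toolchain. The module name \fBlibmongoc\-1.0\fP corresponds to the \fB\&.pc\fP file shipped by the development packages; the guard below keeps the sketch runnable on systems where the packages (or \fBpkg\-config\fP itself) are absent:

```shell
# Query the installed libmongoc version via pkg-config, if available.
# The guard prints a diagnostic instead of failing when the development
# packages (or pkg-config itself) are not installed.
if pkg-config --exists libmongoc-1.0 2>/dev/null; then
  pkg-config --modversion libmongoc-1.0
else
  echo "libmongoc-1.0 not found by pkg-config"
fi
```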
.SS Installing on macOS with Homebrew .sp If you are using a macOS system, the C driver libraries (including both \fBlibmongoc\fP and \fBlibbson\fP) may be installed using the \fI\%Homebrew\fP package manager [1] with the following command: .INDENT 0.0 .INDENT 3.5 .sp .EX $ brew install mongo\-c\-driver .EE .UNINDENT .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Homebrew does not provide separate packages for \fBlibbson\fP and \fBlibmongoc\fP\&. .UNINDENT .UNINDENT .IP [1] 5 The \fI\%Homebrew\fP package manager is not installed by default on macOS. For information on installing Homebrew, refer to \fI\%the Homebrew installation documentation page\fP\&. .SS Building the \fBmongo\-c\-driver\fP Documentation Pages .sp This documentation is rendered using \fI\%Sphinx\fP\&. To easily ensure that all tooling matches expected versions, it is recommended to use \fI\%Poetry\fP to install and run the required tools. .sp \fBTIP:\fP .INDENT 0.0 .INDENT 3.5 Poetry itself may be installed externally, but can also be automatically managed using the included wrapping scripts for Bash (At \fBtools/poetry.sh\fP) or PowerShell (at \fBtools/poetry.ps1\fP). These scripts can stand in for \fBpoetry\fP in any command written below. .UNINDENT .UNINDENT .SS Setting Up the Environment .sp To install the required tooling, use the \fBpoetry install\fP command, enabling documentation dependencies: .INDENT 0.0 .INDENT 3.5 .sp .EX $ poetry install \-\-with=docs .EE .UNINDENT .UNINDENT .sp This will create a user\-local Python virtualenv that contains the necessary tools for building this documentation. The \fBpoetry install\fP command only needs to be run when the \fBpyproject.toml\fP file is changed. 
.SS Running Sphinx .sp Poetry can be used to execute the \fI\%sphinx\-build\fP command: .INDENT 0.0 .INDENT 3.5 .sp .EX $ poetry run sphinx\-build \-b dirhtml \(dq./src/libmongoc/doc\(dq \(dq./_build/docs/html\(dq .EE .UNINDENT .UNINDENT .sp This command will generate the HTML documentation in the \fB_build/docs/html\fP subdirectory. .sp \fBTIP:\fP .INDENT 0.0 .INDENT 3.5 Because Sphinx builds many pages, the build may run quite slowly. For faster builds, it is recommended to use the \fI\%\-\-jobs\fP command\-line option when invoking \fI\%sphinx\-build\fP\&. .UNINDENT .UNINDENT .SS Viewing the Documentation .sp To quickly view the rendered HTML pages, a simple local HTTP server can be spawned on the command line by using Python\(aqs built\-in \fI\%http.server\fP module: .INDENT 0.0 .INDENT 3.5 .sp .EX $ poetry run python \-m http.server \-\-directory=_build/docs/html .EE .UNINDENT .UNINDENT .sp By default, this will serve the documentation at \fI\%http://127.0.0.1:8000\fP, which you can open in any web browser to see the rendered pages. .SS How\-To Guides .sp \fBIMPORTANT:\fP .INDENT 0.0 .INDENT 3.5 The \(dqcookbook\(dq is for problem\-solving and deeper dives into particular tasks. These guides may assume some prior knowledge, and they are not tutorials themselves! For learning the basics, see the \fI\%Tutorials\fP section. .UNINDENT .UNINDENT .SS How to: Install \fBlibbson\fP/\fBlibmongoc\fP from Source .sp \fBIMPORTANT:\fP .INDENT 0.0 .INDENT 3.5 This page assumes that you can successfully configure and build the components that you wish to install, which is detailed and explained on the \fI\%Building the C Driver Libraries from Source\fP tutorial page. Whereas that tutorial walks through getting the sources built and a minimal install working, this page will offer deeper guidance on the nuances and available options for installing the \fBmongo\-c\-driver\fP libraries from such a from\-source build.
.UNINDENT .UNINDENT .sp \fBmongo\-c\-driver\fP uses CMake to generate its installation rules, and installs a variety of artifacts of interest. For integration with downstream programs, the \fI\%Config\-file Packages\fP and \fI\%pkg\-config\fP files would be of particular interest. .sp If you are intending to import \fBlibbson\fP or \fBlibmongoc\fP via CMake or pkg\-config, it can be helpful to be aware of how the respective tool searches for package metadata. .sp CMake Package Lookup CMake builds a set of search paths based on a set of prefixes, which are read from both the environment and from configure\-time CMake settings. .sp In particular, the \fB$PATH\fP environment variable will be used to construct the standard prefixes for the system. For each directory D in \fB$PATH\fP: .INDENT 0.0 .IP 1. 3 If the final path component of D is \(dq\fBbin\fP\(dq or \(dq\fBsbin\fP\(dq, D is replaced with the parent path of D\&. .IP 2. 3 D is added as a search prefix. .UNINDENT .sp This has the effect that common Unix\-specific directories on \fB$PATH\fP, such as \fB/usr/bin\fP and \fB/usr/local/bin\fP, will end up causing CMake to search in \fB/usr\fP and \fB/usr/local\fP as prefixes, respectively. If you have the directory \fB$HOME/.local/bin\fP on your \fB$PATH\fP, then the \fB$HOME/.local\fP directory will also be added to the search path. Having \fB$HOME/.local/bin\fP on \fB$PATH\fP is an increasingly common pattern for many Unix shells, and is recommended if you intend to use a \fI\%per\-user\-prefix\fP for your installation. .sp Additionally, the \fI\%CMAKE_PREFIX_PATH\fP \fIenvironment variable\fP will be used to construct a list of paths. By default, this environment variable is not defined. .sp \fBOn Windows\fP, the directories \fB%ProgramW6432%\fP, \fB%ProgramFiles%\fP, \fB%ProgramFiles(x86)%\fP, \fB%SystemDrive%\eProgram Files\fP, and \fB%SystemDrive%\eProgram Files (x86)\fP will also be added.
(These come from the \fI\%CMAKE_SYSTEM_PREFIX_PATH\fP CMake variable, which is defined during CMake\(aqs platform detection.) .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 For detailed information on package lookup, refer to CMake\(aqs \fI\%Config Mode Search Procedure\fP section for full details. .UNINDENT .UNINDENT .sp pkg\-config Package Lookup The \fBpkg\-config\fP command\-line tool looks for \fB\&.pc\fP files in various directories, by default relative to the path of the \fBpkg\-config\fP tool itself. To get the list of directories that your \fBpkg\-config\fP will search by default, use the following command: .sp Ask \fBpkg\-config\fP what directories it will search by default .INDENT 0.0 .INDENT 3.5 .sp .EX $ pkg\-config \(dqpkg\-config\(dq \-\-variable=\(dqpc_path\(dq .EE .UNINDENT .UNINDENT .sp Additional directories can be specified using the \fB$PKG_CONFIG_PATH\fP environment variable. Such paths will be searched \fIbefore\fP the default \fBpkg\-config\fP paths. .sp \fBOn Windows\fP, registry keys \fBHKCU\eSoftware\epkgconfig\ePKG_CONFIG_PATH\fP and \fBHKLM\eSoftware\epkgconfig\ePKG_CONFIG_PATH\fP can be used to specify additional search directories for \fBpkg\-config\fP\&. Adding directories to the \fBHKCU\e…\fP key is recommended for persisting user\-specific search directories. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 If you have \fBman\fP and \fBpkg\-config\fP installed on your system, lookup procedures are detailed in \fBman 1 pkg\-config\fP\&. This documentation may also be found at many man page archives on the web, such as \fI\%pkg\-config at linux.die.net\fP\&. .UNINDENT .UNINDENT .SS Choosing a Prefix .sp We will call the directory for the user\-local installation \fB$PREFIX\fP\&. Selecting the path to this directory is somewhat arbitrary, but there are some recommendations to consider. The \fB$PREFIX\fP directory is the path that you will give to CMake or \fBpkg\-config\fP when configuring a downstream project. 
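.sp
As a sketch of how a chosen \fB$PREFIX\fP is later handed to each tool, the commands below show a downstream configuration for both CMake and \fBpkg\-config\fP\&. The prefix value and the downstream project are assumptions, and the commands are printed via \fBecho\fP rather than executed so that the sketch runs without an actual installation present:

```shell
# An assumed user-local installation prefix:
PREFIX="$HOME/.local"

# CMake: add the prefix to the package search path when configuring a
# (hypothetical) downstream project:
echo cmake -S . -B _build -D CMAKE_PREFIX_PATH="$PREFIX"

# pkg-config: point PKG_CONFIG_PATH at the installed .pc files so the
# libmongoc-1.0 module can be located:
echo env PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig" pkg-config --cflags libmongoc-1.0
```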
.SS Using an Unprivileged User\-Local Install Prefix (Recommended) .sp It is recommended that you install custom\-built \fBmongo\-c\-driver\fP libraries in an unprivileged filesystem location particular to the user account. .sp macOS Unlike other Unix\-like systems, macOS does not have a specific directory for user\-local package installations, and it is up to the individual to create such directories themselves. .sp The choice of directory to use is essentially arbitrary. For per\-user installations, the only requirement is that the directory be writeable by the user that wishes to perform and use the installation. .sp For the purposes of uniformity with other Unix variants, this guide will lightly recommend using \fB$HOME/.local\fP as a user\-local installation prefix. This is based on the behavior specified by the \fI\%XDG base directory\fP specifications and the \fI\%systemd file\-hierarchy\fP common on Linux and various BSDs, but it is not a standard on other platforms. .sp Linux & Other Unixes On Linux and BSD systems, it is common to use the \fB$HOME/.local\fP directory as the prefix for user\-specific package installations. This convention originates in the \fI\%XDG base directory\fP specification and the \fI\%systemd file\-hierarchy\fP .sp Because of its wide\-spread use and support in many other tools, this guide recommends using \fB$HOME/.local\fP as a user\-local installation prefix. .sp Windows On Windows, there exists a dedicated directory for user\-local files in \fB%UserProfile%\eAppData\eLocal\fP\&. To reference it, expand the \fB%LocalAppData%\fP environment variable. (\fBDo not\fP use the \fB%AppData%\fP environment variable!) .sp Despite this directory existing, it has no prescribed structure that suits our purposes. The choice of user\-local installation prefix is arbitrary. This guide \fIstrongly discourages\fP creating additional files and directories directly within the user\(aqs home directory. 
.sp Consider using \fB%LocalAppData%\eMongoDB\fP as a prefix for the purposes of manually installed components. .SS Selecting a System\-Wide Installation Prefix .sp If you wish to install the \fBmongo\-c\-driver\fP libraries in a directory that is visible to all users, there are a few standard options. .sp Linux, macOS, BSD, or Other Unix Using an install \fB$PREFIX\fP of \fB/usr/local/\fP is the primary recommendation for all Unix platforms, but this may vary on some obscure systems. .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 \fBDO NOT\fP use \fB/usr/\fP nor \fB/\fP (the root directory) as a prefix: These directories are designed to be carefully managed by the system. The \fB/usr/local\fP directory is intentionally reserved for the purpose of unmanaged software installation. .UNINDENT .UNINDENT .sp Alternatively, consider installing to a distinct directory that can be easily removed or relocated, such as \fB/opt/mongo\-c\-driver/\fP\&. This will be easily identifiable and not interact with other software on the system without explicitly opting\-in. .sp Windows .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 It is \fBstrongly discouraged\fP to manually install software system\-wide on Windows. Prefer instead to \fI\%use a per\-user unprivileged installation prefix\fP\&. .UNINDENT .UNINDENT .sp If you wish to perform a system\-wide installation on Windows, prefer to use a named subdirectory of \fB%ProgramData%\fP, which does not require administrative privileges to read and write. (e.g. \fB%ProgramData%\emongo\-c\-driver\fP) .SS Installing with CMake .sp After you have successfully configured and built the libraries and have selected a suitable \fB$PREFIX\fP, you can install the built results. Let the name \fB$BUILD\fP refer to the directory where you executed the build (this is the directory that contains \fBCMakeCache.txt\fP, among many other files). 
.sp From a command line, the installation into your chosen \fB$PREFIX\fP can be run via CMake using the \fI\%cmake \-\-install subcommand\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cmake \-\-install \(dq$BUILD\(dq \-\-prefix \(dq$PREFIX\(dq .EE .UNINDENT .UNINDENT .sp \fBIMPORTANT:\fP .INDENT 0.0 .INDENT 3.5 If you configured the libraries while using a \fImulti\-config generator\fP (e.g. Visual Studio, Xcode), then you will also need to pass the \fI\%\-\-config\fP command\-line option with the value of the build configuration that you wish to install. For any chosen value of \fB\-\-config\fP used for installation, you must also have previously executed a \fI\%cmake \-\-build\fP within that directory with that same \fB\-\-config\fP value. .UNINDENT .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 If you chose to use a system\-wide installation \fB$PREFIX\fP, it is possible that you will need to execute the installation as a privileged user. If you \fIcannot run\fP or \fIdo not want to run\fP the installation as a privileged user, you should instead \fI\%use a per\-user installation prefix\fP\&. .UNINDENT .UNINDENT .sp \fBHINT:\fP .INDENT 0.0 .INDENT 3.5 It is not necessary to set a \fI\%CMAKE_INSTALL_PREFIX\fP if you use the \fI\%\-\-prefix\fP command\-line option with \fBcmake \-\-install\fP\&. The \fB\-\-prefix\fP option will override whatever was specified by \fI\%CMAKE_INSTALL_PREFIX\fP when the project was configured. .UNINDENT .UNINDENT .SS Reference .SS Package Installation Reference .sp \fBlibbson\fP and \fBlibmongoc\fP are available from several package management tools on a variety of systems. .sp \fBIMPORTANT:\fP .INDENT 0.0 .INDENT 3.5 The third\-party packages detailed here are not directly controlled by the \fBmongo\-c\-driver\fP maintainers, and the information found here may be incomplete or out of date. .UNINDENT .UNINDENT .SS Package Names and Availability .sp This table details the names and usage notes of such packages. 
.sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 The development packages (ending in \fB\-dev\fP or \fB\-devel\fP) include files required to build applications using \fBlibbson\fP and \fBlibmongoc\fP\&. .UNINDENT .UNINDENT .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 For a step\-by\-step tutorial on installing packages, refer to \fI\%Installing Prebuilt MongoDB C Driver Libraries\fP\&. .UNINDENT .UNINDENT .TS center; |l|l|l|l|l|. _ T{ Packaging Tool T} T{ Platform(s) T} T{ \fBlibbson\fP package(s) T} T{ \fBlibmongoc\fP package(s) T} T{ Notes T} _ T{ APT (\fBapt\fP/\fBapt\-get\fP) T} T{ Debian\-based Linux distributions (Debian, Ubuntu, Linux Mint, etc.) T} T{ \fBlibbson\-1.0\-0\fP, \fBlibbson\-dev\fP, \fBlibbson\-doc\fP T} T{ \fBlibmongoc\-1.0\-0\fP, \fBlibmongoc\-dev\fP, \fBlibmongoc\-doc\fP T} T{ T} _ T{ Yum/DNF T} T{ RHEL\-based systems (RHEL, Fedora, CentOS, Rocky Linux, AlmaLinux) T} T{ \fBlibbson\fP, \fBlibbson\-devel\fP T} T{ \fBmongo\-c\-driver\-libs\fP, \fBmongo\-c\-driver\-devel\fP T} T{ \fIExcept on Fedora\fP the \fI\%EPEL\fP repositories must be enabled (i.e. 
install the \fBepel\-release\fP package first) T} _ T{ APK T} T{ Alpine Linux T} T{ \fBlibbson\fP, \fBlibbson\-dev\fP, \fBlibbson\-static\fP T} T{ \fBmongo\-c\-driver\fP, \fBmongo\-c\-driver\-dev\fP, \fBmongo\-c\-driver\-static\fP T} T{ T} _ T{ pacman T} T{ Arch Linux T} T{ \fBmongo\-c\-driver\fP T} T{ \fBmongo\-c\-driver\fP T} T{ A single package provides both runtime and development support for both \fBlibbson\fP and \fBlibmongoc\fP T} _ T{ Homebrew T} T{ macOS T} T{ \fBmongo\-c\-driver\fP T} T{ \fBmongo\-c\-driver\fP T} T{ T} _ T{ Conan T} T{ Cross\-platform T} T{ \fBmongo\-c\-driver\fP T} T{ \fBmongo\-c\-driver\fP T} T{ See: \fI\%Conan Settings and Features\fP T} _ T{ vcpkg T} T{ Cross\-platform T} T{ \fBlibbson\fP T} T{ \fBmongo\-c\-driver\fP T} T{ See: \fI\%vcpkg Optional Features\fP T} _ .TE .SS Conan Settings and Features .sp The \fBmongo\-c\-driver\fP \fI\%Conan\fP recipe includes several build settings that correspond to the configure\-time build settings available when building the \fBmongo\-c\-driver\fP project. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 \fI\%The mongo\-c\-driver Conan recipe (includes libbson)\fP .UNINDENT .UNINDENT .TS center; |l|l|l|l|. _ T{ Setting T} T{ Options T} T{ Default T} T{ Notes T} _ T{ \fBshared\fP T} T{ (Boolean) T} T{ \fBFalse\fP T} T{ Build the shared library instead of the static library T} _ T{ \fBfPIC\fP T} T{ (Boolean) T} T{ \fBTrue\fP T} T{ Compile code as position\-independent T} _ T{ \fBsrv\fP T} T{ (Boolean) T} T{ \fBTrue\fP T} T{ Enables MongoDB SRV URI support T} _ T{ \fBwith_ssl\fP T} T{ \fBopenssl\fP, \fBlibressl\fP, \fBwindows\fP, \fBdarwin\fP, \fBFalse\fP T} T{ \fBopenssl\fP [1] T} T{ Select a TLS backend. Setting to \(dq\fBFalse\fP\(dq disables TLS support. 
T} _ T{ \fBwith_sasl\fP T} T{ \fBsspi\fP, \fBcyrus\fP, \fBFalse\fP T} T{ \fBsspi\fP on Windows, \fBFalse\fP elsewhere T} T{ Enable \fI\%SASL authentication\fP support T} _ T{ \fBwith_snappy\fP T} T{ (Boolean) T} T{ \fBTrue\fP T} T{ Enable \fI\%Snappy\fP compression T} _ T{ \fBwith_zlib\fP T} T{ (Boolean) T} T{ \fBTrue\fP T} T{ Enable \fI\%Zlib\fP compression T} _ T{ \fBwith_zstd\fP T} T{ (Boolean) T} T{ \fBTrue\fP T} T{ Enable \fI\%Zstd\fP compression T} _ .TE .IP [1] 5 Conan will use OpenSSL as the default TLS backend, even on platforms that ship with their own TLS implementation (e.g. Windows and macOS). This behavior differs from the upstream default\-configured \fBlibmongoc\fP or the vcpkg distribution of \fBmongo\-c\-driver\fP, which both default to use the TLS implementation preferred for the target platform. .SS vcpkg Optional Features .sp The \fBmongo\-c\-driver\fP package offered by \fI\%vcpkg\fP supports several optional features. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .IP \(bu 2 \fI\%The vcpkg libbson port\fP .IP \(bu 2 \fI\%The vcpkg mongo\-c\-driver port\fP .UNINDENT .UNINDENT .UNINDENT .TS center; |l|l|. _ T{ Feature T} T{ Notes T} _ T{ \fBicu\fP T} T{ Installs the ICU library, which is necessary for non\-ASCII usernames and passwords in pre\-1.25 \fBlibmongoc\fP T} _ T{ \fBopenssl\fP T} T{ Use OpenSSL for encryption, even on Windows and Apple platforms which provide a native TLS backend. .sp If omitted, the default will be to use the preferred TLS implementation for the system. T} _ T{ \fBsnappy\fP T} T{ Enable the \fI\%Snappy\fP compression backend T} _ T{ \fBzstd\fP T} T{ Enable the \fI\%Zstd\fP compression backend T} _ .TE .SS \fBmongo\-c\-driver\fP Platform Support .sp This page documents information about the target platforms and toolchains that are supported by the \fBmongo\-c\-driver\fP libraries. .SS Operating Systems .sp The following operating systems are continually tested with \fBmongo\-c\-driver\fP: .TS center; |l|l|. 
_ T{ Operating System T} T{ Notes T} _ T{ Debian T} T{ Versions \fB9.2\fP, \fB10.0\fP, and \fB11.0\fP T} _ T{ RHEL T} T{ Versions \fB7.0\fP, \fB7.1\fP, \fB8.1\fP, \fB8.2\fP, and \fB8.3\fP\&. RHEL derivatives (e.g. CentOS, Rocky Linux, AlmaLinux) of the same release version are supported. Fedora is also supported, but not continually tested. T} _ T{ Ubuntu T} T{ Versions \fB16.04\fP, \fB18.04\fP, and \fB20.04\fP\&. Subsequent minor releases are also supported. Ubuntu \fB22.04\fP and newer are not yet tested. Ubuntu derivatives based on supported Ubuntu versions are also supported. T} _ T{ Arch Linux T} T{ T} _ T{ macOS T} T{ Version \fB11.0\fP T} _ T{ Windows Server 2008 and Windows Server 2016 T} T{ Windows variants of the same generation are supported T} _ .TE .SS Compilers .sp The following compilers are continually tested for \fBmongo\-c\-driver\fP: .TS center; |l|l|. _ T{ Compiler T} T{ Notes T} _ T{ Clang T} T{ Versions \fB3.7\fP, \fB3.8\fP, and \fB6.0\fP\&. Newer versions are also supported, as well as the corresponding Apple Clang releases. T} _ T{ GCC T} T{ Versions \fB4.8\fP, \fB5.4\fP, \fB6.3\fP, \fB7.5\fP, \fB8.2\fP, \fB8.3\fP, \fB9.4\fP, and \fB10.2\fP\&. The MinGW\-w64 GCC is also tested and supported. T} _ T{ Microsoft Visual C++ (MSVC) T} T{ Tested with MSVC \fB12.x\fP (Visual Studio \fB2013\fP), \fB14.x\fP (Visual Studio \fB2015\fP), and \fB15.x\fP (Visual Studio \fB2017\fP). Newer MSVC versions are supported but not yet tested. T} _ .TE .SS Architectures .sp The following CPU architectures are continually tested for \fBmongo\-c\-driver\fP: .TS center; |l|l|. 
_ T{ Architecture T} T{ Notes T} _ T{ x86 (32\-bit) T} T{ Only tested on Windows T} _ T{ x86_64 (64\-bit x86) T} T{ Tested on Linux, macOS, and Windows T} _ T{ ARM / aarch64 T} T{ Tested on macOS and Linux T} _ T{ Power8 (ppc64le) T} T{ Only tested on Linux T} _ T{ zSeries (s390x) T} T{ Only tested on Linux T} _ .TE .SS Others .sp Other platforms and toolchains are not tested, but similar versions of the above platforms \fIshould work\fP\&. If you encounter a platform or toolchain that you expect to work and find that it does not, please open an issue describing the problem, and/or open a \fI\%GitHub Pull Request\fP to fix it. .sp Simple pull requests to fix unsupported platforms are welcome, but will be considered on a case\-by\-case basis. The acceptance of a pull request to fix the libraries on an unsupported platform does not imply full support of that platform. .SS Tutorial .sp This guide offers a brief introduction to the MongoDB C Driver. .sp For more information on the C API, please refer to the \fI\%API Reference\fP\&. .SS Installing .sp For detailed instructions on installing the MongoDB C Driver on a particular platform, please see the \fI\%installation guide\fP\&. .SS Starting MongoDB .sp To run the examples in this tutorial, MongoDB must be installed and running on \fBlocalhost\fP on the default port, 27017. To check if it is up and running, connect to it with the MongoDB shell. .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongosh \-\-host localhost \-\-port 27017 \-\-quiet Enterprise rs0 [direct: primary] test> db.version() 7.0.0 > .EE .UNINDENT .UNINDENT .SS Include and link libmongoc in your C program .SS Include mongoc.h .sp All libmongoc\(aqs functions and types are available in one header file. 
Simply include \fBmongoc/mongoc.h\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> .EE .UNINDENT .UNINDENT .SS CMake .sp The libmongoc installation includes a \fI\%CMake config\-file package\fP, so you can use CMake\(aqs \fI\%find_package\fP command to import libmongoc\(aqs CMake target and link to libmongoc (as a shared library): .sp CMakeLists.txt .INDENT 0.0 .INDENT 3.5 .sp .EX # Specify the minimum version you require. find_package (mongoc\-1.0 1.7 REQUIRED) # The \(dqhello_mongoc.c\(dq sample program is shared among four tests. add_executable (hello_mongoc ../../hello_mongoc.c) target_link_libraries (hello_mongoc PRIVATE mongo::mongoc_shared) .EE .UNINDENT .UNINDENT .sp You can also use libmongoc as a static library instead: use the \fBmongo::mongoc_static\fP CMake target: .INDENT 0.0 .INDENT 3.5 .sp .EX # Specify the minimum version you require. find_package (mongoc\-1.0 1.7 REQUIRED) # The \(dqhello_mongoc.c\(dq sample program is shared among four tests. add_executable (hello_mongoc ../../hello_mongoc.c) target_link_libraries (hello_mongoc PRIVATE mongo::mongoc_static) .EE .UNINDENT .UNINDENT .SS pkg\-config .sp If you\(aqre not using CMake, use \fI\%pkg\-config\fP on the command line to set header and library paths: .INDENT 0.0 .INDENT 3.5 .sp .EX gcc \-o hello_mongoc hello_mongoc.c $(pkg\-config \-\-libs \-\-cflags libmongoc\-1.0) .EE .UNINDENT .UNINDENT .sp Or to statically link to libmongoc: .INDENT 0.0 .INDENT 3.5 .sp .EX gcc \-o hello_mongoc hello_mongoc.c $(pkg\-config \-\-libs \-\-cflags libmongoc\-static\-1.0) .EE .UNINDENT .UNINDENT .SS Specifying header and include paths manually .sp If you aren\(aqt using CMake or pkg\-config, paths and libraries can be managed manually. 
.INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-o hello_mongoc hello_mongoc.c \e \-I/usr/local/include/libbson\-1.0 \-I/usr/local/include/libmongoc\-1.0 \e \-lmongoc\-1.0 \-lbson\-1.0 $ ./hello_mongoc { \(dqok\(dq : 1.000000 } .EE .UNINDENT .UNINDENT .sp For Windows users, the code can be compiled and run with the following commands. (This assumes that the MongoDB C Driver has been installed to \fBC:\emongo\-c\-driver\fP; change the include directory as needed.) .INDENT 0.0 .INDENT 3.5 .sp .EX C:\e> cl.exe /IC:\emongo\-c\-driver\einclude\elibbson\-1.0 /IC:\emongo\-c\-driver\einclude\elibmongoc\-1.0 hello_mongoc.c C:\e> hello_mongoc { \(dqok\(dq : 1.000000 } .EE .UNINDENT .UNINDENT .SS Use libmongoc in a Microsoft Visual Studio Project .sp See the \fI\%libmongoc and Visual Studio guide\fP\&. .SS Making a Connection .sp Access MongoDB with a \fI\%mongoc_client_t\fP\&. It transparently connects to standalone servers, replica sets and sharded clusters on demand. To perform operations on a database or collection, create a \fI\%mongoc_database_t\fP or \fI\%mongoc_collection_t\fP struct from the \fI\%mongoc_client_t\fP\&. .sp At the start of an application, call \fI\%mongoc_init()\fP before any other libmongoc functions. At the end, call the appropriate destroy function for each collection, database, or client handle, in reverse order from how they were constructed. Call \fI\%mongoc_cleanup()\fP before exiting. .sp The example below establishes a connection to a standalone server on \fBlocalhost\fP, registers the client application as \(dqconnect\-example,\(dq and performs a simple command. .sp More information about database operations can be found in the \fI\%CRUD Operations\fP and \fI\%Executing Commands\fP sections. Examples of connecting to replica sets and sharded clusters can be found in the \fI\%Advanced Connections\fP page, while examples of data compression can be found in the \fI\%Data Compression\fP page. 
.sp hello_mongoc.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> int main (int argc, char *argv[]) { const char *uri_string = \(dqmongodb://localhost:27017\(dq; mongoc_uri_t *uri; mongoc_client_t *client; mongoc_database_t *database; mongoc_collection_t *collection; bson_t *command, reply, *insert; bson_error_t error; char *str; bool retval; /* * Required to initialize libmongoc\(aqs internals */ mongoc_init (); /* * Optionally get MongoDB URI from command line */ if (argc > 1) { uri_string = argv[1]; } /* * Safely create a MongoDB URI object from the given string */ uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } /* * Create a new client instance */ client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } /* * Register the application name so we can track it in the profile logs * on the server. This can also be done from the URI (see other examples). */ mongoc_client_set_appname (client, \(dqconnect\-example\(dq); /* * Get a handle on the database \(dqdb_name\(dq and collection \(dqcoll_name\(dq */ database = mongoc_client_get_database (client, \(dqdb_name\(dq); collection = mongoc_client_get_collection (client, \(dqdb_name\(dq, \(dqcoll_name\(dq); /* * Do work. 
This example pings the database, prints the result as JSON and * performs an insert */ command = BCON_NEW (\(dqping\(dq, BCON_INT32 (1)); retval = mongoc_client_command_simple ( client, \(dqadmin\(dq, command, NULL, &reply, &error); if (!retval) { fprintf (stderr, \(dq%s\en\(dq, error.message); return EXIT_FAILURE; } str = bson_as_json (&reply, NULL); printf (\(dq%s\en\(dq, str); insert = BCON_NEW (\(dqhello\(dq, BCON_UTF8 (\(dqworld\(dq)); if (!mongoc_collection_insert_one (collection, insert, NULL, NULL, &error)) { fprintf (stderr, \(dq%s\en\(dq, error.message); } bson_destroy (insert); bson_destroy (&reply); bson_destroy (command); bson_free (str); /* * Release our handles and clean up libmongoc */ mongoc_collection_destroy (collection); mongoc_database_destroy (database); mongoc_uri_destroy (uri); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .SS Creating BSON Documents .sp Documents are stored in MongoDB\(aqs data format, BSON. The C driver uses \fI\%libbson\fP to create BSON documents. There are several ways to construct them: appending key\-value pairs, using BCON, or parsing JSON. .SS Appending BSON .sp A BSON document, represented as a \fI\%bson_t\fP in code, can be constructed one field at a time using libbson\(aqs append functions. 
.sp For example, to create a document like this: .INDENT 0.0 .INDENT 3.5 .sp .EX { born : ISODate(\(dq1906\-12\-09\(dq), died : ISODate(\(dq1992\-01\-01\(dq), name : { first : \(dqGrace\(dq, last : \(dqHopper\(dq }, languages : [ \(dqMATH\-MATIC\(dq, \(dqFLOW\-MATIC\(dq, \(dqCOBOL\(dq ], degrees: [ { degree: \(dqBA\(dq, school: \(dqVassar\(dq }, { degree: \(dqPhD\(dq, school: \(dqYale\(dq } ] } .EE .UNINDENT .UNINDENT .sp Use the following code: .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> int main (int argc, char *argv[]) { struct tm born = {0}; struct tm died = {0}; const char *lang_names[] = {\(dqMATH\-MATIC\(dq, \(dqFLOW\-MATIC\(dq, \(dqCOBOL\(dq}; const char *schools[] = {\(dqVassar\(dq, \(dqYale\(dq}; const char *degrees[] = {\(dqBA\(dq, \(dqPhD\(dq}; uint32_t i; bson_t *document; bson_t child; bson_array_builder_t *bab; char *str; document = bson_new (); /* * Append { \(dqborn\(dq : ISODate(\(dq1906\-12\-09\(dq) } to the document. * Passing \-1 for the length argument tells libbson to calculate the * string length. */ born.tm_year = 6; /* years are 1900\-based */ born.tm_mon = 11; /* months are 0\-based */ born.tm_mday = 9; bson_append_date_time (document, \(dqborn\(dq, \-1, mktime (&born) * 1000); /* * Append { \(dqdied\(dq : ISODate(\(dq1992\-01\-01\(dq) } to the document. */ died.tm_year = 92; died.tm_mon = 0; died.tm_mday = 1; /* * For convenience, this macro passes length \-1 by default. */ BSON_APPEND_DATE_TIME (document, \(dqdied\(dq, mktime (&died) * 1000); /* * Append a subdocument. */ BSON_APPEND_DOCUMENT_BEGIN (document, \(dqname\(dq, &child); BSON_APPEND_UTF8 (&child, \(dqfirst\(dq, \(dqGrace\(dq); BSON_APPEND_UTF8 (&child, \(dqlast\(dq, \(dqHopper\(dq); bson_append_document_end (document, &child); /* * Append array of strings. Generate keys \(dq0\(dq, \(dq1\(dq, \(dq2\(dq. 
*/ BSON_APPEND_ARRAY_BUILDER_BEGIN (document, \(dqlanguages\(dq, &bab); for (i = 0; i < sizeof lang_names / sizeof (char *); ++i) { bson_array_builder_append_utf8 (bab, lang_names[i], \-1); } bson_append_array_builder_end (document, bab); /* * Array of subdocuments: * degrees: [ { degree: \(dqBA\(dq, school: \(dqVassar\(dq }, ... ] */ BSON_APPEND_ARRAY_BUILDER_BEGIN (document, \(dqdegrees\(dq, &bab); for (i = 0; i < sizeof degrees / sizeof (char *); ++i) { bson_array_builder_append_document_begin (bab, &child); BSON_APPEND_UTF8 (&child, \(dqdegree\(dq, degrees[i]); BSON_APPEND_UTF8 (&child, \(dqschool\(dq, schools[i]); bson_array_builder_append_document_end (bab, &child); } bson_append_array_builder_end (document, bab); /* * Print the document as a JSON string. */ str = bson_as_canonical_extended_json (document, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); /* * Clean up allocated bson documents. */ bson_destroy (document); return 0; } .EE .UNINDENT .UNINDENT .sp See the \fI\%libbson documentation\fP for all of the types that can be appended to a \fI\%bson_t\fP\&. .SS Using BCON .sp \fIBSON C Object Notation\fP, BCON for short, is an alternative way of constructing BSON documents in a manner closer to the intended format. It has less type\-safety than BSON\(aqs append functions but results in less code. 
.INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> int main (int argc, char *argv[]) { struct tm born = { 0 }; struct tm died = { 0 }; bson_t *document; char *str; born.tm_year = 6; born.tm_mon = 11; born.tm_mday = 9; died.tm_year = 92; died.tm_mon = 0; died.tm_mday = 1; document = BCON_NEW ( \(dqborn\(dq, BCON_DATE_TIME (mktime (&born) * 1000), \(dqdied\(dq, BCON_DATE_TIME (mktime (&died) * 1000), \(dqname\(dq, \(dq{\(dq, \(dqfirst\(dq, BCON_UTF8 (\(dqGrace\(dq), \(dqlast\(dq, BCON_UTF8 (\(dqHopper\(dq), \(dq}\(dq, \(dqlanguages\(dq, \(dq[\(dq, BCON_UTF8 (\(dqMATH\-MATIC\(dq), BCON_UTF8 (\(dqFLOW\-MATIC\(dq), BCON_UTF8 (\(dqCOBOL\(dq), \(dq]\(dq, \(dqdegrees\(dq, \(dq[\(dq, \(dq{\(dq, \(dqdegree\(dq, BCON_UTF8 (\(dqBA\(dq), \(dqschool\(dq, BCON_UTF8 (\(dqVassar\(dq), \(dq}\(dq, \(dq{\(dq, \(dqdegree\(dq, BCON_UTF8 (\(dqPhD\(dq), \(dqschool\(dq, BCON_UTF8 (\(dqYale\(dq), \(dq}\(dq, \(dq]\(dq); /* * Print the document as a JSON string. */ str = bson_as_canonical_extended_json (document, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); /* * Clean up allocated bson documents. */ bson_destroy (document); return 0; } .EE .UNINDENT .UNINDENT .sp Notice that BCON can create arrays, subdocuments and arbitrary fields. .SS Creating BSON from JSON .sp For \fIsingle\fP documents, BSON can be created from JSON strings via \fI\%bson_new_from_json\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> int main (int argc, char *argv[]) { bson_error_t error; bson_t *bson; char *string; const char *json = \(dq{\e\(dqname\e\(dq: {\e\(dqfirst\e\(dq:\e\(dqGrace\e\(dq, \e\(dqlast\e\(dq:\e\(dqHopper\e\(dq}}\(dq; bson = bson_new_from_json ((const uint8_t *)json, \-1, &error); if (!bson) { fprintf (stderr, \(dq%s\en\(dq, error.message); return EXIT_FAILURE; } string = bson_as_canonical_extended_json (bson, NULL); printf (\(dq%s\en\(dq, string); bson_free (string); bson_destroy (bson); return 0; } .EE .UNINDENT .UNINDENT .sp To initialize BSON from a sequence of JSON documents, use \fI\%bson_json_reader_t\fP\&. 
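.sp The streaming approach can be sketched as follows. This is a minimal sketch, assuming libbson is installed and linked as shown earlier; the file name \fBdocs.json\fP is hypothetical, standing in for any file containing a sequence of JSON documents:

```c
/*
 * Sketch: stream a sequence of JSON documents into BSON using
 * bson_json_reader_t. "docs.json" is a hypothetical file name,
 * assumed to contain one or more JSON documents.
 */
#include <bson/bson.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
   bson_error_t error;
   bson_t doc = BSON_INITIALIZER;
   int r;
   char *str;

   bson_json_reader_t *reader =
      bson_json_reader_new_from_file ("docs.json", &error);
   if (!reader) {
      fprintf (stderr, "%s\n", error.message);
      return EXIT_FAILURE;
   }

   /* bson_json_reader_read returns 1 for each document read,
    * 0 at end of input, and -1 on a parse error. */
   while ((r = bson_json_reader_read (reader, &doc, &error)) > 0) {
      str = bson_as_canonical_extended_json (&doc, NULL);
      printf ("%s\n", str);
      bson_free (str);
      bson_reinit (&doc); /* reset the bson_t before the next document */
   }
   if (r < 0) {
      fprintf (stderr, "%s\n", error.message);
   }

   bson_json_reader_destroy (reader);
   bson_destroy (&doc);
   return (r < 0) ? EXIT_FAILURE : EXIT_SUCCESS;
}
```

As with the earlier examples, this can be compiled against libbson alone, e.g. with \fBpkg\-config \-\-cflags \-\-libs libbson\-1.0\fP\&.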
.SS Basic CRUD Operations .sp This section demonstrates the basics of using the C Driver to interact with MongoDB. .SS Inserting a Document .sp To insert documents into a collection, first obtain a handle to a \fBmongoc_collection_t\fP via a \fBmongoc_client_t\fP\&. Then, use \fI\%mongoc_collection_insert_one()\fP to add BSON documents to the collection. This example inserts into the database \(dqmydb\(dq and collection \(dqmycoll\(dq. .sp When finished, ensure that allocated structures are freed by using their respective destroy functions. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_collection_t *collection; bson_error_t error; bson_oid_t oid; bson_t *doc; mongoc_init (); client = mongoc_client_new (\(dqmongodb://localhost:27017/?appname=insert\-example\(dq); collection = mongoc_client_get_collection (client, \(dqmydb\(dq, \(dqmycoll\(dq); doc = bson_new (); bson_oid_init (&oid, NULL); BSON_APPEND_OID (doc, \(dq_id\(dq, &oid); BSON_APPEND_UTF8 (doc, \(dqhello\(dq, \(dqworld\(dq); if (!mongoc_collection_insert_one ( collection, doc, NULL, NULL, &error)) { fprintf (stderr, \(dq%s\en\(dq, error.message); } bson_destroy (doc); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp Compile the code and run it: .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-o insert insert.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./insert .EE .UNINDENT .UNINDENT .sp On Windows: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\e> cl.exe /IC:\emongo\-c\-driver\einclude\elibbson\-1.0 /IC:\emongo\-c\-driver\einclude\elibmongoc\-1.0 insert.c C:\e> insert .EE .UNINDENT .UNINDENT .sp To verify that the insert succeeded, connect with the MongoDB shell. 
.INDENT 0.0 .INDENT 3.5 .sp .EX $ mongo MongoDB shell version: 3.0.6 connecting to: test > use mydb switched to db mydb > db.mycoll.find() { \(dq_id\(dq : ObjectId(\(dq55ef43766cb5f36a3bae6ee4\(dq), \(dqhello\(dq : \(dqworld\(dq } > .EE .UNINDENT .UNINDENT .SS Finding a Document .sp To query a MongoDB collection with the C driver, use the function \fI\%mongoc_collection_find_with_opts()\fP\&. This returns a \fI\%cursor\fP to the matching documents. The following examples iterate through the result cursors and print the matches to \fBstdout\fP as JSON strings. .sp Use a document as a query specifier; for example, .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqcolor\(dq : \(dqred\(dq } .EE .UNINDENT .UNINDENT .sp will match any document with a field named \(dqcolor\(dq with value \(dqred\(dq. An empty document \fB{}\fP can be used to match all documents. .sp This first example uses an empty query specifier to find all documents in the database \(dqmydb\(dq and collection \(dqmycoll\(dq. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_collection_t *collection; mongoc_cursor_t *cursor; const bson_t *doc; bson_t *query; char *str; mongoc_init (); client = mongoc_client_new (\(dqmongodb://localhost:27017/?appname=find\-example\(dq); collection = mongoc_client_get_collection (client, \(dqmydb\(dq, \(dqmycoll\(dq); query = bson_new (); cursor = mongoc_collection_find_with_opts (collection, query, NULL, NULL); while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } bson_destroy (query); mongoc_cursor_destroy (cursor); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp Compile the code and run it: .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-o find find.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./find { \(dq_id\(dq : { \(dq$oid\(dq : 
\(dq55ef43766cb5f36a3bae6ee4\(dq }, \(dqhello\(dq : \(dqworld\(dq } .EE .UNINDENT .UNINDENT .sp On Windows: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\e> cl.exe /IC:\emongo\-c\-driver\einclude\elibbson\-1.0 /IC:\emongo\-c\-driver\einclude\elibmongoc\-1.0 find.c C:\e> find { \(dq_id\(dq : { \(dq$oid\(dq : \(dq55ef43766cb5f36a3bae6ee4\(dq }, \(dqhello\(dq : \(dqworld\(dq } .EE .UNINDENT .UNINDENT .sp To look for a specific document, add a specifier to \fBquery\fP\&. This example adds a call to \fBBSON_APPEND_UTF8()\fP to look for all documents matching \fB{\(dqhello\(dq : \(dqworld\(dq}\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_collection_t *collection; mongoc_cursor_t *cursor; const bson_t *doc; bson_t *query; char *str; mongoc_init (); client = mongoc_client_new ( \(dqmongodb://localhost:27017/?appname=find\-specific\-example\(dq); collection = mongoc_client_get_collection (client, \(dqmydb\(dq, \(dqmycoll\(dq); query = bson_new (); BSON_APPEND_UTF8 (query, \(dqhello\(dq, \(dqworld\(dq); cursor = mongoc_collection_find_with_opts (collection, query, NULL, NULL); while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } bson_destroy (query); mongoc_cursor_destroy (cursor); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-o find\-specific find\-specific.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./find\-specific { \(dq_id\(dq : { \(dq$oid\(dq : \(dq55ef43766cb5f36a3bae6ee4\(dq }, \(dqhello\(dq : \(dqworld\(dq } .EE .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX C:\e> cl.exe /IC:\emongo\-c\-driver\einclude\elibbson\-1.0 /IC:\emongo\-c\-driver\einclude\elibmongoc\-1.0 find\-specific.c C:\e> find\-specific { \(dq_id\(dq : { \(dq$oid\(dq : \(dq55ef43766cb5f36a3bae6ee4\(dq }, 
\(dqhello\(dq : \(dqworld\(dq } .EE .UNINDENT .UNINDENT .SS Updating a Document .sp This code snippet gives an example of using \fI\%mongoc_collection_update_one()\fP to update the fields of a document. .sp Using the \(dqmydb\(dq database, the following example inserts an example document into the \(dqmycoll\(dq collection. Then, using its \fB_id\fP field, the document is updated with different values and a new field. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_collection_t *collection; mongoc_client_t *client; bson_error_t error; bson_oid_t oid; bson_t *doc = NULL; bson_t *update = NULL; bson_t *query = NULL; mongoc_init (); client = mongoc_client_new (\(dqmongodb://localhost:27017/?appname=update\-example\(dq); collection = mongoc_client_get_collection (client, \(dqmydb\(dq, \(dqmycoll\(dq); bson_oid_init (&oid, NULL); doc = BCON_NEW (\(dq_id\(dq, BCON_OID (&oid), \(dqkey\(dq, BCON_UTF8 (\(dqold_value\(dq)); if (!mongoc_collection_insert_one (collection, doc, NULL, NULL, &error)) { fprintf (stderr, \(dq%s\en\(dq, error.message); goto fail; } query = BCON_NEW (\(dq_id\(dq, BCON_OID (&oid)); update = BCON_NEW (\(dq$set\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_UTF8 (\(dqnew_value\(dq), \(dqupdated\(dq, BCON_BOOL (true), \(dq}\(dq); if (!mongoc_collection_update_one ( collection, query, update, NULL, NULL, &error)) { fprintf (stderr, \(dq%s\en\(dq, error.message); goto fail; } fail: if (doc) bson_destroy (doc); if (query) bson_destroy (query); if (update) bson_destroy (update); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp Compile the code and run it: .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-o update update.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./update .EE .UNINDENT .UNINDENT .sp On Windows: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\e> cl.exe /IC:\emongo\-c\-driver\einclude\elibbson\-1.0 
/IC:\emongo\-c\-driver\einclude\elibmongoc\-1.0 update.c C:\e> update .EE .UNINDENT .UNINDENT .sp To verify that the update succeeded, connect with the MongoDB shell. .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongo MongoDB shell version: 3.0.6 connecting to: test > use mydb switched to db mydb > db.mycoll.find({\(dqupdated\(dq : true}) { \(dq_id\(dq : ObjectId(\(dq55ef549236fe322f9490e17b\(dq), \(dqupdated\(dq : true, \(dqkey\(dq : \(dqnew_value\(dq } > .EE .UNINDENT .UNINDENT .SS Deleting a Document .sp This example illustrates the use of \fI\%mongoc_collection_delete_one()\fP to delete a document. .sp The following code inserts a sample document into the database \(dqmydb\(dq and collection \(dqmycoll\(dq. Then, it deletes the inserted document by its \fB_id\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_collection_t *collection; bson_error_t error; bson_oid_t oid; bson_t *doc; mongoc_init (); client = mongoc_client_new (\(dqmongodb://localhost:27017/?appname=delete\-example\(dq); collection = mongoc_client_get_collection (client, \(dqmydb\(dq, \(dqmycoll\(dq); doc = bson_new (); bson_oid_init (&oid, NULL); BSON_APPEND_OID (doc, \(dq_id\(dq, &oid); BSON_APPEND_UTF8 (doc, \(dqhello\(dq, \(dqworld\(dq); if (!mongoc_collection_insert_one (collection, doc, NULL, NULL, &error)) { fprintf (stderr, \(dqInsert failed: %s\en\(dq, error.message); } bson_destroy (doc); doc = bson_new (); BSON_APPEND_OID (doc, \(dq_id\(dq, &oid); if (!mongoc_collection_delete_one ( collection, doc, NULL, NULL, &error)) { fprintf (stderr, \(dqDelete failed: %s\en\(dq, error.message); } bson_destroy (doc); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp Compile the code and run it: .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-o 
delete delete.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./delete .EE .UNINDENT .UNINDENT .sp On Windows: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\e> cl.exe /IC:\emongo\-c\-driver\einclude\elibbson\-1.0 /IC:\emongo\-c\-driver\einclude\elibmongoc\-1.0 delete.c C:\e> delete .EE .UNINDENT .UNINDENT .sp Use the MongoDB shell to prove that the documents have been removed successfully. .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongo MongoDB shell version: 3.0.6 connecting to: test > use mydb switched to db mydb > db.mycoll.count({\(dqhello\(dq : \(dqworld\(dq}) 0 > .EE .UNINDENT .UNINDENT .SS Counting Documents .sp Counting the number of documents in a MongoDB collection is similar to performing a \fI\%find operation\fP\&. This example counts the number of documents matching \fB{\(dqhello\(dq : \(dqworld\(dq}\fP in the database \(dqmydb\(dq and collection \(dqmycoll\(dq. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_collection_t *collection; bson_error_t error; bson_t *doc; int64_t count; mongoc_init (); client = mongoc_client_new (\(dqmongodb://localhost:27017/?appname=count\-example\(dq); collection = mongoc_client_get_collection (client, \(dqmydb\(dq, \(dqmycoll\(dq); doc = bson_new_from_json ( (const uint8_t *) \(dq{\e\(dqhello\e\(dq : \e\(dqworld\e\(dq}\(dq, \-1, &error); count = mongoc_collection_count ( collection, MONGOC_QUERY_NONE, doc, 0, 0, NULL, &error); if (count < 0) { fprintf (stderr, \(dq%s\en\(dq, error.message); } else { printf (\(dq%\(dq PRId64 \(dq\en\(dq, count); } bson_destroy (doc); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp Compile the code and run it: .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-o count count.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./count 1 .EE .UNINDENT .UNINDENT .sp On Windows: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\e> cl.exe 
/IC:\emongo\-c\-driver\einclude\elibbson\-1.0 /IC:\emongo\-c\-driver\einclude\elibmongoc\-1.0 count.c C:\e> count 1 .EE .UNINDENT .UNINDENT .SS Executing Commands .sp The driver provides helper functions for executing MongoDB commands on client, database and collection structures. The \fB_simple\fP variants return booleans indicating success or failure. .sp This example executes the \fI\%ping\fP command against the database \(dqmydb\(dq. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <bson/bson.h> #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; bson_error_t error; bson_t *command; bson_t reply; char *str; mongoc_init (); client = mongoc_client_new ( \(dqmongodb://localhost:27017/?appname=executing\-example\(dq); command = BCON_NEW (\(dqping\(dq, BCON_INT32 (1)); if (mongoc_client_command_simple ( client, \(dqmydb\(dq, command, NULL, &reply, &error)) { str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } else { fprintf (stderr, \(dqFailed to run command: %s\en\(dq, error.message); } bson_destroy (command); bson_destroy (&reply); mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp Compile the code and run it: .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-o executing executing.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./executing { \(dqok\(dq : { \(dq$numberDouble\(dq : \(dq1.0\(dq }, \(dq$clusterTime\(dq : { \(dqclusterTime\(dq : { \(dq$timestamp\(dq : { \(dqt\(dq : 1682609211, \(dqi\(dq : 1 } }, \(dqsignature\(dq : { \(dqhash\(dq : { \(dq$binary\(dq : { \(dqbase64\(dq : \(dqAAAAAAAAAAAAAAAAAAAAAAAAAAA=\(dq, \(dqsubType\(dq : \(dq00\(dq } }, \(dqkeyId\(dq : { \(dq$numberLong\(dq : \(dq0\(dq } } }, \(dqoperationTime\(dq : { \(dq$timestamp\(dq : { \(dqt\(dq : 1682609211, \(dqi\(dq : 1 } } } .EE .UNINDENT .UNINDENT .sp On Windows: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\e> cl.exe /IC:\emongo\-c\-driver\einclude\elibbson\-1.0 /IC:\emongo\-c\-driver\einclude\elibmongoc\-1.0 
executing.c C:\e> executing { \(dqok\(dq : { \(dq$numberDouble\(dq : \(dq1.0\(dq }, \(dq$clusterTime\(dq : { \(dqclusterTime\(dq : { \(dq$timestamp\(dq : { \(dqt\(dq : 1682609211, \(dqi\(dq : 1 } }, \(dqsignature\(dq : { \(dqhash\(dq : { \(dq$binary\(dq : { \(dqbase64\(dq : \(dqAAAAAAAAAAAAAAAAAAAAAAAAAAA=\(dq, \(dqsubType\(dq : \(dq00\(dq } }, \(dqkeyId\(dq : { \(dq$numberLong\(dq : \(dq0\(dq } } }, \(dqoperationTime\(dq : { \(dq$timestamp\(dq : { \(dqt\(dq : 1682609211, \(dqi\(dq : 1 } } } .EE .UNINDENT .UNINDENT .SS Threading .sp The MongoDB C Driver is thread\-unaware in the vast majority of its operations. This means it is up to the programmer to guarantee thread\-safety. .sp However, \fI\%mongoc_client_pool_t\fP is thread\-safe and is used to fetch a \fBmongoc_client_t\fP in a thread\-safe manner. After retrieving a client from the pool, the client structure should be considered owned by the calling thread. When the thread is finished, the client should be placed back into the pool. .sp example\-pool.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* gcc example\-pool.c \-o example\-pool $(pkg\-config \-\-cflags \-\-libs * libmongoc\-1.0) */ /* ./example\-pool [CONNECTION_STRING] */ #include <mongoc/mongoc.h> #include <pthread.h> #include <stdio.h> static pthread_mutex_t mutex; static bool in_shutdown = false; static void * worker (void *data) { mongoc_client_pool_t *pool = data; mongoc_client_t *client; bson_t ping = BSON_INITIALIZER; bson_error_t error; bool r; BSON_APPEND_INT32 (&ping, \(dqping\(dq, 1); while (true) { client = mongoc_client_pool_pop (pool); /* Do something with client. If you are writing an HTTP server, you * probably only want to hold onto the client for the portion of the * request performing database queries. 
*/ r = mongoc_client_command_simple ( client, \(dqadmin\(dq, &ping, NULL, NULL, &error); if (!r) { fprintf (stderr, \(dq%s\en\(dq, error.message); } mongoc_client_pool_push (pool, client); pthread_mutex_lock (&mutex); if (in_shutdown || !r) { pthread_mutex_unlock (&mutex); break; } pthread_mutex_unlock (&mutex); } bson_destroy (&ping); return NULL; } int main (int argc, char *argv[]) { const char *uri_string = \(dqmongodb://127.0.0.1/?appname=pool\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_client_pool_t *pool; pthread_t threads[10]; unsigned i; void *ret; pthread_mutex_init (&mutex, NULL); mongoc_init (); if (argc > 1) { uri_string = argv[1]; } uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } pool = mongoc_client_pool_new (uri); mongoc_client_pool_set_error_api (pool, 2); for (i = 0; i < 10; i++) { pthread_create (&threads[i], NULL, worker, pool); } sleep (10); pthread_mutex_lock (&mutex); in_shutdown = true; pthread_mutex_unlock (&mutex); for (i = 0; i < 10; i++) { pthread_join (threads[i], &ret); } mongoc_client_pool_destroy (pool); mongoc_uri_destroy (uri); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .SS Next Steps .sp To find information on advanced topics, browse the rest of the \fI\%C driver guide\fP or the \fI\%official MongoDB documentation\fP\&. .sp For help with common issues, consult the \fI\%Troubleshooting\fP page. To report a bug or request a new feature, follow \fI\%these instructions\fP\&. .SS Authentication .sp This guide covers the use of authentication options with the MongoDB C Driver. Ensure that the MongoDB server is also properly configured for authentication before making a connection. For more information, see the \fI\%MongoDB security documentation\fP\&. 
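Before configuring any particular mechanism, it can help to confirm that the server is reachable and that the supplied credentials actually authenticate. Authentication errors surface on the first command a client sends, so a cheap \fBping\fP works as a check. The following is a minimal sketch, not part of the official examples; the URI, user, and password are placeholder values.

```c
#include <mongoc/mongoc.h>
#include <stdio.h>

int
main (void)
{
   mongoc_client_t *client;
   bson_t *ping;
   bson_error_t error;
   bool ok;

   mongoc_init ();

   /* Placeholder URI: substitute your own user, password, host, and
    * authSource. */
   client = mongoc_client_new (
      "mongodb://user:password@localhost:27017/?authSource=mydb");

   /* The first command triggers authentication; "ping" is inexpensive. */
   ping = BCON_NEW ("ping", BCON_INT32 (1));
   ok = mongoc_client_command_simple (client, "admin", ping, NULL, NULL, &error);
   if (!ok) {
      fprintf (stderr, "Authentication check failed: %s\n", error.message);
   }

   bson_destroy (ping);
   mongoc_client_destroy (client);
   mongoc_cleanup ();
   return ok ? 0 : 1;
}
```

If the credentials are wrong, \fBerror.message\fP will describe the authentication failure rather than a generic connection error.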
.sp The MongoDB C driver supports several authentication mechanisms through the use of MongoDB connection URIs. .sp By default, if a username and password are provided as part of the connection string (and an optional authentication database), they are used to connect via the default authentication mechanism of the server. .sp To select a specific authentication mechanism other than the default, see the list of supported mechanisms below. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = mongoc_client_new (\(dqmongodb://user:password@localhost/?authSource=mydb\(dq); .EE .UNINDENT .UNINDENT .sp Currently supported values for the authMechanism connection string option are: .INDENT 0.0 .IP \(bu 2 \fI\%SCRAM\-SHA\-1\fP .IP \(bu 2 \fI\%MONGODB\-CR (deprecated)\fP .IP \(bu 2 \fI\%GSSAPI\fP .IP \(bu 2 \fI\%PLAIN\fP .IP \(bu 2 \fI\%X509\fP .IP \(bu 2 \fI\%MONGODB\-AWS\fP .UNINDENT .SS Basic Authentication (SCRAM\-SHA\-256) .sp MongoDB 4.0 introduces support for authenticating using the SCRAM protocol with the more secure SHA\-256 hash described in \fI\%RFC 7677\fP\&. Using this authentication mechanism means that the password is never actually sent over the wire when authenticating, but rather a computed proof that the client password is the same as the password the server knows. In MongoDB 4.0, the C driver can determine the correct default authentication mechanism for users with stored SCRAM\-SHA\-1 and SCRAM\-SHA\-256 credentials: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = mongoc_client_new (\(dqmongodb://user:password@localhost/?authSource=mydb\(dq); /* the correct authMechanism is negotiated between the driver and server. */ .EE .UNINDENT .UNINDENT .sp Alternatively, SCRAM\-SHA\-256 can be explicitly specified as an authMechanism. 
.INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = mongoc_client_new (\(dqmongodb://user:password@localhost/?authMechanism=SCRAM\-SHA\-256&authSource=mydb\(dq); .EE .UNINDENT .UNINDENT .SS Basic Authentication (SCRAM\-SHA\-1) .sp The default authentication mechanism before MongoDB 4.0 is \fBSCRAM\-SHA\-1\fP (\fI\%RFC 5802\fP). Using this authentication mechanism means that the password is never actually sent over the wire when authenticating, but rather a computed proof that the client password is the same as the password the server knows. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = mongoc_client_new (\(dqmongodb://user:password@localhost/?authMechanism=SCRAM\-SHA\-1&authSource=mydb\(dq); .EE .UNINDENT .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 \fBSCRAM\-SHA\-1\fP authenticates against the \fBadmin\fP database by default. If the user is created in another database, then specifying the authSource is required. .UNINDENT .UNINDENT .SS Legacy Authentication (MONGODB\-CR) .sp The MONGODB\-CR authMechanism is deprecated and will no longer function in MongoDB 4.0. Instead, specify no authMechanism and the driver will use an authentication mechanism compatible with your server. .SS GSSAPI (Kerberos) Authentication .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 On UNIX\-like environments, Kerberos support requires compiling the driver against \fBcyrus\-sasl\fP\&. .sp On Windows, Kerberos support requires compiling the driver against Windows Native SSPI or \fBcyrus\-sasl\fP\&. The default configuration of the driver will use Windows Native SSPI. .sp To modify the default configuration, use the cmake option \fBENABLE_SASL\fP\&. .UNINDENT .UNINDENT .sp \fBGSSAPI\fP (Kerberos) authentication is available in the Enterprise Edition of MongoDB. To authenticate using \fBGSSAPI\fP, the MongoDB C driver must be installed with SASL support. 
.sp On UNIX\-like environments, run the \fBkinit\fP command before using the following authentication methods: .INDENT 0.0 .INDENT 3.5 .sp .EX $ kinit mongodbuser@EXAMPLE.COM mongodbuser@EXAMPLE.COM\(aqs Password: $ klist Credentials cache: FILE:/tmp/krb5cc_1000 Principal: mongodbuser@EXAMPLE.COM Issued Expires Principal Feb 9 13:48:51 2013 Feb 9 23:48:51 2013 krbtgt/EXAMPLE.COM@EXAMPLE.COM .EE .UNINDENT .UNINDENT .sp Now authenticate using the MongoDB URI. \fBGSSAPI\fP authenticates against the \fB$external\fP virtual database, so a database does not need to be specified in the URI. Note that the Kerberos principal \fImust\fP be URL\-encoded: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client; client = mongoc_client_new (\(dqmongodb://mongodbuser%40EXAMPLE.COM@mongo\-server.example.com/?authMechanism=GSSAPI\(dq); .EE .UNINDENT .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 \fBGSSAPI\fP authenticates against the \fB$external\fP database, so specifying the authSource database is not required. .UNINDENT .UNINDENT .sp The driver supports these GSSAPI properties: .INDENT 0.0 .IP \(bu 2 \fBCANONICALIZE_HOST_NAME\fP: This might be required with Cyrus\-SASL when the hosts report different hostnames than what is used in the Kerberos database. The default is \(dqfalse\(dq. .IP \(bu 2 \fBSERVICE_NAME\fP: Use a different service name than the default, \(dqmongodb\(dq. .UNINDENT .sp Set properties in the URL: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client; client = mongoc_client_new (\(dqmongodb://mongodbuser%40EXAMPLE.COM@mongo\-server.example.com/?authMechanism=GSSAPI&\(dq \(dqauthMechanismProperties=SERVICE_NAME:other,CANONICALIZE_HOST_NAME:true\(dq); .EE .UNINDENT .UNINDENT .sp If you encounter errors such as \fBInvalid net address\fP, check if the application is behind a NAT (Network Address Translation) firewall. If so, create a ticket that uses \fBforwardable\fP and \fBaddressless\fP Kerberos tickets. 
This can be done by passing \fB\-f \-A\fP to \fBkinit\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX $ kinit \-f \-A mongodbuser@EXAMPLE.COM .EE .UNINDENT .UNINDENT .SS SASL Plain Authentication .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 The MongoDB C Driver must be compiled with SASL support in order to use \fBSASL PLAIN\fP authentication. .UNINDENT .UNINDENT .sp MongoDB Enterprise Edition supports the \fBSASL PLAIN\fP authentication mechanism, initially intended for delegating authentication to an LDAP server. Using the \fBSASL PLAIN\fP mechanism is very similar to the challenge response mechanism with usernames and passwords. This authentication mechanism uses the \fB$external\fP virtual database for \fBLDAP\fP support: .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 \fBSASL PLAIN\fP is a clear\-text authentication mechanism. It is strongly recommended to connect to MongoDB using TLS with certificate validation when using the \fBPLAIN\fP mechanism. .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client; client = mongoc_client_new (\(dqmongodb://user:password@example.com/?authMechanism=PLAIN\(dq); .EE .UNINDENT .UNINDENT .sp \fBPLAIN\fP authenticates against the \fB$external\fP database, so specifying the authSource database is not required. .SS X.509 Certificate Authentication .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 The MongoDB C Driver must be compiled with TLS support for X.509 authentication support. Once this is done, start a server with the following options: .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongod \-\-tlsMode requireTLS \-\-tlsCertificateKeyFile server.pem \-\-tlsCAFile ca.pem .EE .UNINDENT .UNINDENT .UNINDENT .UNINDENT .sp The \fBMONGODB\-X509\fP mechanism authenticates a username derived from the distinguished subject name of the X.509 certificate presented by the driver during TLS negotiation. This authentication method requires the use of TLS connections with certificate validation. 
.INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client; mongoc_ssl_opt_t ssl_opts = { 0 }; ssl_opts.pem_file = \(dqmycert.pem\(dq; ssl_opts.pem_pwd = \(dqmycertpassword\(dq; ssl_opts.ca_file = \(dqmyca.pem\(dq; ssl_opts.ca_dir = \(dqtrust_dir\(dq; ssl_opts.weak_cert_validation = false; client = mongoc_client_new (\(dqmongodb://x509_derived_username@localhost/?authMechanism=MONGODB\-X509\(dq); mongoc_client_set_ssl_opts (client, &ssl_opts); .EE .UNINDENT .UNINDENT .sp \fBMONGODB\-X509\fP authenticates against the \fB$external\fP database, so specifying the authSource database is not required. For more information on the x509_derived_username, see the MongoDB server \fI\%x.509 tutorial\fP\&. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 The MongoDB C Driver will attempt to determine the x509 derived username when none is provided, and as of MongoDB 3.4 providing the username is not required at all. .UNINDENT .UNINDENT .SS Authentication via AWS IAM .sp The \fBMONGODB\-AWS\fP mechanism authenticates to MongoDB servers with credentials provided by AWS Identity and Access Management (IAM). .sp To authenticate, create a user with an associated Amazon Resource Name (ARN) on the \fB$external\fP database, and specify the \fBMONGODB\-AWS\fP \fBauthMechanism\fP in the URI. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://localhost/?authMechanism=MONGODB\-AWS\(dq); .EE .UNINDENT .UNINDENT .sp Since \fBMONGODB\-AWS\fP always authenticates against the \fB$external\fP database, specifying the authSource database is not required. .sp Credentials include the \fBaccess key id\fP, \fBsecret access key\fP, and optional \fBsession token\fP\&. They may be obtained in the following ways. .SS AWS credentials via URI .sp Credentials may be passed directly in the URI as username/password. 
.INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://<access_key_id>:<secret_access_key>@localhost/?authMechanism=MONGODB\-AWS\(dq); .EE .UNINDENT .UNINDENT .sp This may include a \fBsession token\fP passed with \fBauthMechanismProperties\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://<access_key_id>:<secret_access_key>@localhost/?authMechanism=MONGODB\-AWS&authMechanismProperties=AWS_SESSION_TOKEN:<session_token>\(dq); .EE .UNINDENT .UNINDENT .SS AWS credentials via environment .sp If credentials are not passed through the URI, libmongoc will check for the following environment variables. .INDENT 0.0 .IP \(bu 2 AWS_ACCESS_KEY_ID .IP \(bu 2 AWS_SECRET_ACCESS_KEY .IP \(bu 2 AWS_SESSION_TOKEN (optional) .UNINDENT .SS AWS Credentials via ECS .sp If credentials are not passed in the URI or with environment variables, libmongoc will check if the environment variable \fBAWS_CONTAINER_CREDENTIALS_RELATIVE_URI\fP is set, and if so, attempt to retrieve temporary credentials from the ECS task metadata by querying a link local address. .SS AWS Credentials via EC2 .sp If credentials are not passed in the URI or with environment variables, and the environment variable \fBAWS_CONTAINER_CREDENTIALS_RELATIVE_URI\fP is not set, libmongoc will attempt to retrieve temporary credentials from the EC2 machine metadata by querying link local addresses. .SS Basic Troubleshooting .SS Troubleshooting Checklist .sp The following is a short list of things to check when you have a problem. .INDENT 0.0 .IP \(bu 2 Did you call \fBmongoc_init()\fP in \fBmain()\fP? If not, you will likely see a segfault. .IP \(bu 2 Have you leaked any clients or cursors as can be found with \fBmongoc\-stat <PID>\fP? .IP \(bu 2 Have packets been delivered to the server? See egress bytes from \fBmongoc\-stat <PID>\fP\&. .IP \(bu 2 Does \fBASAN\fP show any leaks? Ensure you call \fBmongoc_cleanup()\fP at the end of your process to cleanup lingering allocations from the MongoDB C driver. 
.IP \(bu 2 If compiling your own copy of MongoDB C Driver, consider using the cmake option \fB\-DENABLE_TRACING=ON\fP to enable function tracing and hex dumps of network packets to \fBSTDERR\fP and \fBSTDOUT\fP\&. .UNINDENT .SS Performance Counters .sp The MongoDB C Driver comes with an optional and unique feature to help developers and sysadmins troubleshoot problems in production. Performance counters are available for each process using the C Driver. If available, the counters can be accessed outside of the application process via a shared memory segment. The counters may be used to graph statistics about your application process from tools like Munin or Nagios. For example, the command \fBwatch \-\-interval=0.5 \-d mongoc\-stat $PID\fP may be used to monitor an application. .sp Performance counters are only available on Linux platforms and macOS arm64 platforms that support shared memory segments. On supported platforms, they are enabled by default. Applications can be built without the counters by specifying the cmake option \fB\-DENABLE_SHM_COUNTERS=OFF\fP\&. Additionally, if performance counters are already compiled, they can be disabled at runtime by specifying the environment variable \fBMONGOC_DISABLE_SHM\fP\&. .sp Performance counters keep track of the following: .INDENT 0.0 .IP \(bu 2 Active and Disposed Cursors .IP \(bu 2 Active and Disposed Clients, Client Pools, and Socket Streams. .IP \(bu 2 Number of operations sent and received, by type. .IP \(bu 2 Bytes transferred and received. .IP \(bu 2 Authentication successes and failures. .IP \(bu 2 Number of wire protocol errors. .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 An operation is considered \(dqsent\(dq when one or more bytes of the corresponding message is written to the stream, regardless of whether the entire message is successfully written or if the operation ultimately succeeds or fails. 
This does not include bytes that may be written during the stream connection process, such as TLS handshake messages. .UNINDENT .UNINDENT .sp To access counters for a given process, simply provide the process id to the \fBmongoc\-stat\fP program installed with the MongoDB C Driver. .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongoc\-stat 22203 Operations : Egress Total : The number of sent operations. : 13247 Operations : Ingress Total : The number of received operations. : 13246 Operations : Egress Queries : The number of sent Query operations. : 13247 Operations : Ingress Queries : The number of received Query operations. : 0 Operations : Egress GetMore : The number of sent GetMore operations. : 0 Operations : Ingress GetMore : The number of received GetMore operations. : 0 Operations : Egress Insert : The number of sent Insert operations. : 0 Operations : Ingress Insert : The number of received Insert operations. : 0 Operations : Egress Delete : The number of sent Delete operations. : 0 Operations : Ingress Delete : The number of received Delete operations. : 0 Operations : Egress Update : The number of sent Update operations. : 0 Operations : Ingress Update : The number of received Update operations. : 0 Operations : Egress KillCursors : The number of sent KillCursors operations. : 0 Operations : Ingress KillCursors : The number of received KillCursors operations. : 0 Operations : Egress Msg : The number of sent Msg operations. : 0 Operations : Ingress Msg : The number of received Msg operations. : 0 Operations : Egress Reply : The number of sent Reply operations. : 0 Operations : Ingress Reply : The number of received Reply operations. : 13246 Cursors : Active : The number of active cursors. : 1 Cursors : Disposed : The number of disposed cursors. : 13246 Clients : Active : The number of active clients. : 1 Clients : Disposed : The number of disposed clients. : 0 Streams : Active : The number of active streams. : 1 Streams : Disposed : The number of disposed streams. 
: 0 Streams : Egress Bytes : The number of bytes sent. : 794931 Streams : Ingress Bytes : The number of bytes received. : 589694 Streams : N Socket Timeouts : The number of socket timeouts. : 0 Client Pools : Active : The number of active client pools. : 1 Client Pools : Disposed : The number of disposed client pools. : 0 Protocol : Ingress Errors : The number of protocol errors on ingress. : 0 Auth : Failures : The number of failed authentication requests. : 0 Auth : Success : The number of successful authentication requests. : 0 .EE .UNINDENT .UNINDENT .SS Submitting a Bug Report .sp Think you\(aqve found a bug? Want to see a new feature in the MongoDB C driver? Please open a case in our issue management tool, JIRA: .INDENT 0.0 .IP \(bu 2 \fI\%Create an account and login\fP\&. .IP \(bu 2 Navigate to \fI\%the CDRIVER project\fP\&. .IP \(bu 2 Click \fICreate Issue\fP \- Please provide as much information as possible about the issue type and how to reproduce it. .UNINDENT .sp Bug reports in JIRA for all driver projects (i.e. CDRIVER, CSHARP, JAVA) and the Core Server (i.e. SERVER) project are \fIpublic\fP\&. .SS Guides .SS Configuring TLS .SS Configuration with URI options .sp Enable TLS by including \fBtls=true\fP in the URI. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://localhost:27017/\(dq); mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLS, true); mongoc_client_t *client = mongoc_client_new_from_uri (uri); .EE .UNINDENT .UNINDENT .sp The following URI options may be used to further configure TLS: .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_TLS T} T{ tls T} T{ {true|false}, indicating if TLS must be used. T} _ T{ MONGOC_URI_TLSCERTIFICATEKEYFILE T} T{ tlscertificatekeyfile T} T{ Path to PEM formatted Private Key, with its Public Certificate concatenated at the end. 
T} _ T{ MONGOC_URI_TLSCERTIFICATEKEYFILEPASSWORD T} T{ tlscertificatekeypassword T} T{ The password, if any, used to unlock the encrypted Private Key. T} _ T{ MONGOC_URI_TLSCAFILE T} T{ tlscafile T} T{ One, or a bundle of, Certificate Authorities that should be trusted. T} _ T{ MONGOC_URI_TLSALLOWINVALIDCERTIFICATES T} T{ tlsallowinvalidcertificates T} T{ Accept and ignore certificate verification errors (e.g. untrusted issuer, expired, etc.) T} _ T{ MONGOC_URI_TLSALLOWINVALIDHOSTNAMES T} T{ tlsallowinvalidhostnames T} T{ Ignore hostname verification of the certificate (e.g. Man In The Middle, using valid certificate, but issued for another hostname) T} _ T{ MONGOC_URI_TLSINSECURE T} T{ tlsinsecure T} T{ {true|false}, indicating if insecure TLS options should be used. Currently this implies MONGOC_URI_TLSALLOWINVALIDCERTIFICATES and MONGOC_URI_TLSALLOWINVALIDHOSTNAMES. T} _ T{ MONGOC_URI_TLSDISABLECERTIFICATEREVOCATIONCHECK T} T{ tlsdisablecertificaterevocationcheck T} T{ {true|false}, indicates if revocation checking (CRL / OCSP) should be disabled. T} _ T{ MONGOC_URI_TLSDISABLEOCSPENDPOINTCHECK T} T{ tlsdisableocspendpointcheck T} T{ {true|false}, indicates if OCSP responder endpoints should not be requested when an OCSP response is not stapled. T} _ .TE .SS Configuration with mongoc_ssl_opt_t .sp Alternatively, the \fI\%mongoc_ssl_opt_t\fP struct may be used to configure TLS with \fI\%mongoc_client_set_ssl_opts()\fP or \fI\%mongoc_client_pool_set_ssl_opts()\fP\&. Most of the configurable options can be set using the \fI\%Connection String URI\fP\&. .TS center; |l|l|. _ T{ \fBmongoc_ssl_opt_t key\fP T} T{ \fBURI key\fP T} _ T{ pem_file T} T{ tlsClientCertificateKeyFile T} _ T{ pem_pwd T} T{ tlsClientCertificateKeyPassword T} _ T{ ca_file T} T{ tlsCAFile T} _ T{ weak_cert_validation T} T{ tlsAllowInvalidCertificates T} _ T{ allow_invalid_hostname T} T{ tlsAllowInvalidHostnames T} _ .TE .sp The only exclusions are \fBcrl_file\fP and \fBca_dir\fP\&. 
Those may only be set with \fI\%mongoc_ssl_opt_t\fP\&. .SS Client Authentication .sp When MongoDB is started with TLS enabled, it will by default require the client to provide a client certificate issued by a certificate authority specified by \fB\-\-tlsCAFile\fP, or an authority trusted by the native certificate store in use on the server. .sp To provide the client certificate, set the \fBtlsCertificateKeyFile\fP in the URI to a PEM armored certificate file. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://localhost:27017/\(dq); mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLS, true); mongoc_uri_set_option_as_utf8 (uri, MONGOC_URI_TLSCERTIFICATEKEYFILE, \(dq/path/to/client\-certificate.pem\(dq); mongoc_client_t *client = mongoc_client_new_from_uri (uri); .EE .UNINDENT .UNINDENT .SS Server Certificate Verification .sp The MongoDB C Driver will automatically verify the validity of the server certificate, including the issuing Certificate Authority, hostname validation, and expiration. .sp To override this behavior, it is possible to disable hostname validation, OCSP endpoint revocation checking, revocation checking entirely, and allow invalid certificates. .sp This behavior is controlled using the \fBtlsAllowInvalidHostnames\fP, \fBtlsDisableOCSPEndpointCheck\fP, \fBtlsDisableCertificateRevocationCheck\fP, and \fBtlsAllowInvalidCertificates\fP options respectively. By default, all are set to \fBfalse\fP\&. .sp It is not recommended to change these defaults as it exposes the client to \fIMan In The Middle\fP attacks (when \fBtlsAllowInvalidHostnames\fP is set), invalid certificates (when \fBtlsAllowInvalidCertificates\fP is set), or potentially revoked certificates (when \fBtlsDisableOCSPEndpointCheck\fP or \fBtlsDisableCertificateRevocationCheck\fP are set). .SS Supported Libraries .sp By default, libmongoc will attempt to find a supported TLS library and enable TLS support. 
This is controlled by the cmake flag \fBENABLE_SSL\fP, which is set to \fBAUTO\fP by default. Valid values are: .INDENT 0.0 .IP \(bu 2 \fBAUTO\fP the default behavior. Link to the system\(aqs native TLS library, or attempt to find OpenSSL. .IP \(bu 2 \fBDARWIN\fP link to Secure Transport, the native TLS library on macOS. .IP \(bu 2 \fBWINDOWS\fP link to Secure Channel, the native TLS library on Windows. .IP \(bu 2 \fBOPENSSL\fP link to OpenSSL (libssl). An optional install path may be specified with \fBOPENSSL_ROOT\fP\&. .IP \(bu 2 \fBLIBRESSL\fP link to LibreSSL\(aqs libtls. (LibreSSL\(aqs compatible libssl may be linked to by setting \fBOPENSSL\fP). .IP \(bu 2 \fBOFF\fP disable TLS support. .UNINDENT .SS OpenSSL .sp The MongoDB C Driver uses OpenSSL, if available, on Linux and Unix platforms (besides macOS). Industry best practices and some regulations require the use of TLS 1.1 or newer, which requires at least OpenSSL 1.0.1. Check your OpenSSL version like so: .INDENT 0.0 .INDENT 3.5 .sp .EX $ openssl version .EE .UNINDENT .UNINDENT .sp Ensure your system\(aqs OpenSSL is a recent version (at least 1.0.1), or install a recent version in a non\-system path and build against it with: .INDENT 0.0 .INDENT 3.5 .sp .EX cmake \-DOPENSSL_ROOT_DIR=/absolute/path/to/openssl .EE .UNINDENT .UNINDENT .sp When compiled against OpenSSL, the driver will attempt to load the system default certificate store, as configured by the distribution. That can be overridden by setting the \fBtlsCAFile\fP URI option or with the fields \fBca_file\fP and \fBca_dir\fP in the \fI\%mongoc_ssl_opt_t\fP\&. 
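As a hedged illustration of overriding the system default certificate store described above, the following sketch starts from the library's default TLS options and points the driver at a specific CA bundle through \fBca_file\fP; the file path is a placeholder, not a real bundle shipped with the driver.

```c
#include <mongoc/mongoc.h>

int
main (void)
{
   mongoc_init ();

   mongoc_client_t *client =
      mongoc_client_new ("mongodb://localhost:27017/?tls=true");

   /* Copy the library defaults, then override only the CA bundle. */
   mongoc_ssl_opt_t ssl_opts = *mongoc_ssl_opt_get_default ();
   ssl_opts.ca_file = "/path/to/ca-bundle.pem"; /* placeholder path */
   mongoc_client_set_ssl_opts (client, &ssl_opts);

   mongoc_client_destroy (client);
   mongoc_cleanup ();
   return 0;
}
```

Starting from \fBmongoc_ssl_opt_get_default()\fP rather than a zeroed struct preserves any platform defaults the driver would otherwise apply.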
.sp The Online Certificate Status Protocol (OCSP) (see \fI\%RFC 6960\fP) is fully supported when using OpenSSL 1.0.1+ with the following notes: .INDENT 0.0 .IP \(bu 2 When a \fBcrl_file\fP is set with \fI\%mongoc_ssl_opt_t\fP, and the \fBcrl_file\fP revokes the server\(aqs certificate, the certificate is considered revoked (even if the certificate has a valid stapled OCSP response) .UNINDENT .SS LibreSSL / libtls .sp The MongoDB C Driver supports LibreSSL through the use of OpenSSL compatibility checks when configured to compile against \fBopenssl\fP\&. It also supports the new \fBlibtls\fP library when configured to build against \fBlibressl\fP\&. .sp When compiled against \fBlibtls\fP, the \fBcrl_file\fP option of a \fI\%mongoc_ssl_opt_t\fP is not supported, and will issue an error if used. .sp Setting \fBtlsDisableOCSPEndpointCheck\fP and \fBtlsDisableCertificateRevocationCheck\fP has no effect. .sp The Online Certificate Status Protocol (OCSP) (see \fI\%RFC 6960\fP) is partially supported with the following notes: .INDENT 0.0 .IP \(bu 2 The Must\-Staple extension (see \fI\%RFC 7633\fP) is ignored. Connection may continue if a Must\-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder). .IP \(bu 2 Connection will continue if a Must\-Staple certificate is presented without a stapled response and the OCSP responder is down. .UNINDENT .SS Native TLS Support on Windows (Secure Channel) .sp The MongoDB C Driver supports the Windows native TLS library (Secure Channel, or SChannel), and its native crypto library (Cryptography API: Next Generation, or CNG). .sp When compiled against the Windows native libraries, the \fBca_dir\fP option of a \fI\%mongoc_ssl_opt_t\fP is not supported, and will issue an error if used. .sp Encrypted PEM files (e.g., setting \fBtlsCertificateKeyPassword\fP) are also not supported, and will result in error when attempting to load them. 
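Because encrypted key files are rejected when building against Secure Channel, a client certificate can instead be supplied as an unencrypted PEM file through the URI option. A minimal sketch, with a placeholder Windows path:

```c
#include <mongoc/mongoc.h>

int
main (void)
{
   mongoc_init ();

   mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost:27017/");
   mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLS, true);
   /* Placeholder path: an unencrypted PEM with the private key and
    * certificate concatenated. */
   mongoc_uri_set_option_as_utf8 (
      uri, MONGOC_URI_TLSCERTIFICATEKEYFILE, "C:\\certs\\client.pem");

   mongoc_client_t *client = mongoc_client_new_from_uri (uri);

   mongoc_client_destroy (client);
   mongoc_uri_destroy (uri);
   mongoc_cleanup ();
   return 0;
}
```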
.sp When \fBtlsCAFile\fP is set, the driver will only allow server certificates issued by the authority (or authorities) provided. When no \fBtlsCAFile\fP is set, the driver will look up the Certificate Authority using the \fBSystem Local Machine Root\fP certificate store to confirm the provided certificate. .sp When \fBcrl_file\fP is set with \fI\%mongoc_ssl_opt_t\fP, the driver will import the revocation list to the \fBSystem Local Machine Root\fP certificate store. .sp Setting \fBtlsDisableOCSPEndpointCheck\fP has no effect. .sp The Online Certificate Status Protocol (OCSP) (see \fI\%RFC 6960\fP) is partially supported with the following notes: .INDENT 0.0 .IP \(bu 2 The Must\-Staple extension (see \fI\%RFC 7633\fP) is ignored. Connection may continue if a Must\-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder). .IP \(bu 2 When a \fBcrl_file\fP is set with \fI\%mongoc_ssl_opt_t\fP, and the \fBcrl_file\fP revokes the server\(aqs certificate, the OCSP response takes precedence. E.g. if the server presents a certificate with a valid stapled OCSP response, the certificate is considered valid even if the \fBcrl_file\fP marks it as revoked. .IP \(bu 2 Connection will continue if a Must\-Staple certificate is presented without a stapled response and the OCSP responder is down. .UNINDENT .SS Native TLS Support on macOS / Darwin (Secure Transport) .sp The MongoDB C Driver supports the Darwin (OS X, macOS, iOS, etc.) native TLS library (Secure Transport), and its native crypto library (Common Crypto, or CC). .sp When compiled against Secure Transport, the \fBca_dir\fP and \fBcrl_file\fP options of a \fI\%mongoc_ssl_opt_t\fP are not supported. An error is issued if either are used. .sp When \fBtlsCAFile\fP is set, the driver will only allow server certificates issued by the authority (or authorities) provided. 
When no \fBtlsCAFile\fP is set, the driver will use the Certificate Authorities in the currently unlocked keychains. .sp Setting \fBtlsDisableOCSPEndpointCheck\fP and \fBtlsDisableCertificateRevocationCheck\fP has no effect. .sp The Online Certificate Status Protocol (OCSP) (see \fI\%RFC 6960\fP) is partially supported with the following notes. .INDENT 0.0 .IP \(bu 2 The Must\-Staple extension (see \fI\%RFC 7633\fP) is ignored. Connection may continue if a Must\-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder). .IP \(bu 2 Connection will continue if a Must\-Staple certificate is presented without a stapled response and the OCSP responder is down. .UNINDENT .SS Common Tasks .sp Drivers for some other languages provide helper functions to perform certain common tasks. In the C Driver we must explicitly build commands to send to the server. .SS Setup .sp First we\(aqll write some code to insert sample data: .sp doc\-common\-insert.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* Don\(aqt try to compile this file on its own. 
It\(aqs meant to be #included by example code */ /* Insert some sample data */ bool insert_data (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; enum N { ndocs = 4 }; bson_t *docs[ndocs]; bson_error_t error; int i = 0; bool ret; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); docs[0] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (1.0), \(dqtags\(dq, \(dq[\(dq, \(dqdog\(dq, \(dqcat\(dq, \(dq]\(dq); docs[1] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (2.0), \(dqtags\(dq, \(dq[\(dq, \(dqcat\(dq, \(dq]\(dq); docs[2] = BCON_NEW ( \(dqx\(dq, BCON_DOUBLE (2.0), \(dqtags\(dq, \(dq[\(dq, \(dqmouse\(dq, \(dqcat\(dq, \(dqdog\(dq, \(dq]\(dq); docs[3] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (3.0), \(dqtags\(dq, \(dq[\(dq, \(dq]\(dq); for (i = 0; i < ndocs; i++) { mongoc_bulk_operation_insert (bulk, docs[i]); bson_destroy (docs[i]); docs[i] = NULL; } ret = mongoc_bulk_operation_execute (bulk, NULL, &error); if (!ret) { fprintf (stderr, \(dqError inserting data: %s\en\(dq, error.message); } mongoc_bulk_operation_destroy (bulk); return ret; } /* A helper which we\(aqll use a lot later on */ void print_res (const bson_t *reply) { char *str; BSON_ASSERT (reply); str = bson_as_canonical_extended_json (reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } .EE .UNINDENT .UNINDENT .SS \(dqexplain\(dq Command .sp This is how to use the \fBexplain\fP command in MongoDB 3.2+: .sp explain.c .INDENT 0.0 .INDENT 3.5 .sp .EX bool explain (mongoc_collection_t *collection) { bson_t *command; bson_t reply; bson_error_t error; bool res; command = BCON_NEW (\(dqexplain\(dq, \(dq{\(dq, \(dqfind\(dq, BCON_UTF8 (COLLECTION_NAME), \(dqfilter\(dq, \(dq{\(dq, \(dqx\(dq, BCON_INT32 (1), \(dq}\(dq, \(dq}\(dq); res = mongoc_collection_command_simple ( collection, command, NULL, &reply, &error); if (!res) { fprintf (stderr, \(dqError with explain: %s\en\(dq, error.message); goto cleanup; } /* Do something with the reply */ print_res (&reply); cleanup: bson_destroy (&reply); 
bson_destroy (command); return res; } .EE .UNINDENT .UNINDENT .SS Running the Examples .sp common\-operations.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* * Copyright 2016 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the \(dqLicense\(dq); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE\-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an \(dqAS IS\(dq BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <mongoc/mongoc.h> #include <stdio.h> const char *COLLECTION_NAME = \(dqthings\(dq; #include \(dq../doc\-common\-insert.c\(dq #include \(dqexplain.c\(dq int main (int argc, char *argv[]) { mongoc_database_t *database = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *collection = NULL; mongoc_uri_t *uri = NULL; bson_error_t error; char *host_and_port; int res = 0; if (argc < 2 || argc > 3) { fprintf (stderr, \(dqusage: %s MONGOD\-1\-CONNECTION\-STRING \(dq \(dq[MONGOD\-2\-HOST\-NAME:MONGOD\-2\-PORT]\en\(dq, argv[0]); fprintf (stderr, \(dqMONGOD\-1\-CONNECTION\-STRING can be \(dq \(dqof the following forms:\en\(dq); fprintf (stderr, \(dqlocalhost\et\et\et\etlocal machine\en\(dq); fprintf (stderr, \(dqlocalhost:27018\et\et\et\etlocal machine on port 27018\en\(dq); fprintf (stderr, \(dqmongodb://user:pass@localhost:27017\et\(dq \(dqlocal machine on port 27017, and authenticate with username \(dq \(dquser and password pass\en\(dq); return EXIT_FAILURE; } mongoc_init (); if (strncmp (argv[1], \(dqmongodb://\(dq, 10) == 0) { host_and_port = bson_strdup (argv[1]); } else { host_and_port = bson_strdup_printf (\(dqmongodb://%s\(dq, argv[1]); } uri = mongoc_uri_new_with_error (host_and_port, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq
\(dqerror message: %s\en\(dq, host_and_port, error.message); res = EXIT_FAILURE; goto cleanup; } client = mongoc_client_new_from_uri (uri); if (!client) { res = EXIT_FAILURE; goto cleanup; } mongoc_client_set_error_api (client, 2); database = mongoc_client_get_database (client, \(dqtest\(dq); collection = mongoc_database_get_collection (database, COLLECTION_NAME); printf (\(dqInserting data\en\(dq); if (!insert_data (collection)) { res = EXIT_FAILURE; goto cleanup; } printf (\(dqexplain\en\(dq); if (!explain (collection)) { res = EXIT_FAILURE; goto cleanup; } cleanup: if (collection) { mongoc_collection_destroy (collection); } if (database) { mongoc_database_destroy (database); } if (client) { mongoc_client_destroy (client); } if (uri) { mongoc_uri_destroy (uri); } bson_free (host_and_port); mongoc_cleanup (); return res; } .EE .UNINDENT .UNINDENT .sp First launch two separate instances of mongod (must be done from separate shells): .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongod .EE .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX $ mkdir /tmp/db2 $ mongod \-\-dbpath /tmp/db2 \-\-port 27018 # second instance .EE .UNINDENT .UNINDENT .sp Now compile and run the example program: .INDENT 0.0 .INDENT 3.5 .sp .EX
$ cd examples/common_operations
$ gcc \-Wall \-o example common\-operations.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0)
$ ./example localhost:27017 localhost:27018
Inserting data
explain
{ \(dqexecutionStats\(dq : { \(dqallPlansExecution\(dq : [], \(dqexecutionStages\(dq : { \(dqadvanced\(dq : 19, \(dqdirection\(dq : \(dqforward\(dq , \(dqdocsExamined\(dq : 76, \(dqexecutionTimeMillisEstimate\(dq : 0, \(dqfilter\(dq : { \(dqx\(dq : { \(dq$eq\(dq : 1 } }, \(dqinvalidates\(dq : 0, \(dqisEOF\(dq : 1, \(dqnReturned\(dq : 19, \(dqneedTime\(dq : 58, \(dqneedYield\(dq : 0, \(dqrestoreState\(dq : 0, \(dqsaveState\(dq : 0, \(dqstage\(dq : \(dqCOLLSCAN\(dq , \(dqworks\(dq : 78 }, \(dqexecutionSuccess\(dq : true, \(dqexecutionTimeMillis\(dq : 0, \(dqnReturned\(dq : 19,
\(dqtotalDocsExamined\(dq : 76, \(dqtotalKeysExamined\(dq : 0 }, \(dqok\(dq : 1, \(dqqueryPlanner\(dq : { \(dqindexFilterSet\(dq : false, \(dqnamespace\(dq : \(dqtest.things\(dq, \(dqparsedQuery\(dq : { \(dqx\(dq : { \(dq$eq\(dq : 1 } }, \(dqplannerVersion\(dq : 1, \(dqrejectedPlans\(dq : [], \(dqwinningPlan\(dq : { \(dqdirection\(dq : \(dqforward\(dq , \(dqfilter\(dq : { \(dqx\(dq : { \(dq$eq\(dq : 1 } }, \(dqstage\(dq : \(dqCOLLSCAN\(dq } }, \(dqserverInfo\(dq : { \(dqgitVersion\(dq : \(dq05552b562c7a0b3143a729aaa0838e558dc49b25\(dq , \(dqhost\(dq : \(dqMacBook\-Pro\-57.local\(dq, \(dqport\(dq : 27017, \(dqversion\(dq : \(dq3.2.6\(dq } } .EE .UNINDENT .UNINDENT .SS Advanced Connections .sp The following guide contains information specific to certain types of MongoDB configurations. .sp For an example of connecting to a simple standalone server, see the \fI\%Tutorial\fP\&. To establish a connection with authentication options enabled, see the \fI\%Authentication\fP page. To see an example of a connection with data compression, see the \fI\%Data Compression\fP page. .SS Connecting to a Replica Set .sp Connecting to a \fI\%replica set\fP is much like connecting to a standalone MongoDB server. Simply specify the replica set name using the \fB?replicaSet=myreplset\fP URI option. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_init (); /* Create our MongoDB Client */ client = mongoc_client_new ( \(dqmongodb://host01:27017,host02:27017,host03:27017/?replicaSet=myreplset\(dq); /* Do some work */ /* TODO */ /* Clean up */ mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp \fBTIP:\fP .INDENT 0.0 .INDENT 3.5 Multiple hostnames can be specified in the MongoDB connection string URI, with a comma separating hosts in the seed list. .sp It is recommended to use a seed list of members of the replica set to allow the driver to connect to any node.
.UNINDENT .UNINDENT .SS Connecting to a Sharded Cluster .sp To connect to a \fI\%sharded cluster\fP, specify the \fBmongos\fP nodes the client should connect to. The C Driver will automatically detect that it has connected to a \fBmongos\fP sharding server. .sp If more than one hostname is specified, a seed list will be created to attempt failover between the \fBmongos\fP instances. .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 Specifying the \fBreplicaSet\fP parameter when connecting to a \fBmongos\fP sharding server is invalid. .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_init (); /* Create our MongoDB Client */ client = mongoc_client_new (\(dqmongodb://myshard01:27017/\(dq); /* Do something with client ... */ /* Free the client */ mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .SS Connecting to an IPv6 Address .sp The MongoDB C Driver will automatically resolve IPv6 addresses from host names. However, to specify an IPv6 address directly, wrap the address in \fB[]\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://[::1]:27017\(dq); .EE .UNINDENT .UNINDENT .SS Connecting with IPv4 and IPv6 .sp If connecting to a hostname that has both IPv4 and IPv6 DNS records, the behavior follows \fI\%RFC\-6555\fP\&. A connection to the IPv6 address is attempted first. If IPv6 fails, then a connection is attempted to the IPv4 address. If the connection attempt to IPv6 does not complete within 250ms, then IPv4 is tried in parallel. Whichever connection succeeds first cancels the other. The successful DNS result is cached for 10 minutes. .sp As a consequence, attempts to connect to a mongod only listening on IPv4 may be delayed if there are both A (IPv4) and AAAA (IPv6) DNS records associated with the host. .sp To avoid a delay, configure hostnames to match the MongoDB configuration.
That is, only create an A record if the mongod is only listening on IPv4. .SS Connecting to a UNIX Domain Socket .sp On UNIX\-like systems, the C Driver can connect directly to a MongoDB server using a UNIX domain socket. Pass the URL\-encoded path to the socket, which \fImust\fP be suffixed with \fB\&.sock\fP\&. For example, to connect to a domain socket at \fB/tmp/mongodb\-27017.sock\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://%2Ftmp%2Fmongodb\-27017.sock\(dq); .EE .UNINDENT .UNINDENT .sp Include username and password like so: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://user:pass@%2Ftmp%2Fmongodb\-27017.sock\(dq); .EE .UNINDENT .UNINDENT .SS Connecting to a server over TLS .sp These are instructions for configuring TLS/SSL connections. .sp To run a server locally (on port 27017, for example): .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongod \-\-port 27017 \-\-tlsMode requireTLS \-\-tlsCertificateKeyFile server.pem \-\-tlsCAFile ca.pem .EE .UNINDENT .UNINDENT .sp Add \fB/?tls=true\fP to the end of a client URI. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = NULL; client = mongoc_client_new (\(dqmongodb://localhost:27017/?tls=true\(dq); .EE .UNINDENT .UNINDENT .sp MongoDB requires client certificates by default, unless the \fB\-\-tlsAllowConnectionsWithoutCertificates\fP option is provided. The C Driver can be configured to present a client certificate using the URI option \fBtlsCertificateKeyFile\fP, which may be referenced through the constant \fBMONGOC_URI_TLSCERTIFICATEKEYFILE\fP\&.
.INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = NULL; mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://localhost:27017/?tls=true\(dq); mongoc_uri_set_option_as_utf8 (uri, MONGOC_URI_TLSCERTIFICATEKEYFILE, \(dqclient.pem\(dq); client = mongoc_client_new_from_uri (uri); .EE .UNINDENT .UNINDENT .sp The client certificate provided by \fBtlsCertificateKeyFile\fP must be issued by one of the Certificate Authorities trusted by the server: those listed in \fB\-\-tlsCAFile\fP, or, when that option is omitted, those in the server\(aqs native certificate store. .sp See \fI\%Configuring TLS\fP for more information on the various TLS\-related options. .SS Compressing data to and from MongoDB .sp This content has been relocated to the \fI\%Data Compression\fP page. .SS Additional Connection Options .sp The full list of connection options can be found in the \fI\%mongoc_uri_t\fP docs. .sp Certain socket/connection related options are not configurable: .TS center; |l|l|l|. _ T{ Option T} T{ Description T} T{ Value T} _ T{ SO_KEEPALIVE T} T{ TCP Keep Alive T} T{ Enabled T} _ T{ TCP_KEEPIDLE T} T{ How long a connection needs to remain idle before TCP starts sending keepalive probes T} T{ 120 seconds T} _ T{ TCP_KEEPINTVL T} T{ The time in seconds between TCP probes T} T{ 10 seconds T} _ T{ TCP_KEEPCNT T} T{ How many probes to send, without acknowledgement, before dropping the connection T} T{ 9 probes T} _ T{ TCP_NODELAY T} T{ Send packets as soon as possible or buffer small packets (Nagle algorithm) T} T{ Enabled (no buffering) T} _ .TE .SS Connection Pooling .sp The MongoDB C driver has two connection modes: single\-threaded and pooled. Single\-threaded mode is optimized for embedding the driver within languages like PHP. Multi\-threaded programs should use pooled mode: this mode minimizes the total connection count, and in pooled mode background threads monitor the MongoDB server topology, so the program need not block to scan it.
.SS Single Mode .sp In single mode, your program creates a \fI\%mongoc_client_t\fP directly: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = mongoc_client_new ( \(dqmongodb://hostA,hostB/?replicaSet=my_rs\(dq); .EE .UNINDENT .UNINDENT .sp The client connects on demand when your program first uses it for a MongoDB operation. Using a non\-blocking socket per server, it begins a check on each server concurrently, and uses the asynchronous \fBpoll\fP or \fBselect\fP function to receive events from the sockets, until all have responded or timed out. Put another way, in single\-threaded mode the C Driver fans out to begin all checks concurrently, then fans in once all checks have completed or timed out. Once the scan completes, the client executes your program\(aqs operation and returns. .sp In single mode, the client re\-scans the server topology roughly once per minute. If more than a minute has elapsed since the previous scan, the next operation on the client will block while the client completes its scan. This interval is configurable with \fBheartbeatFrequencyMS\fP in the connection string. (See \fI\%mongoc_uri_t\fP\&.) .sp A single client opens one connection per server in your topology: these connections are used both for scanning the topology and performing normal operations. .SS Pooled Mode .sp To activate pooled mode, create a \fI\%mongoc_client_pool_t\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new ( \(dqmongodb://hostA,hostB/?replicaSet=my_rs\(dq); mongoc_client_pool_t *pool = mongoc_client_pool_new (uri); .EE .UNINDENT .UNINDENT .sp When your program first calls \fI\%mongoc_client_pool_pop()\fP, the pool launches monitoring threads in the background. Monitoring threads independently connect to all servers in the connection string. As monitoring threads receive hello responses from the servers, they update the shared view of the server topology. 
Additional monitoring threads and connections are created as new servers are discovered. Monitoring threads are terminated when servers are removed from the shared view of the server topology. .sp Each thread that executes MongoDB operations must check out a client from the pool: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = mongoc_client_pool_pop (pool); /* use the client for operations ... */ mongoc_client_pool_push (pool, client); .EE .UNINDENT .UNINDENT .sp The \fI\%mongoc_client_t\fP object is not thread\-safe, only the \fI\%mongoc_client_pool_t\fP is. .sp When the driver is in pooled mode, your program\(aqs operations are unblocked as soon as monitoring discovers a usable server. For example, if a thread in your program is waiting to execute an \(dqinsert\(dq on the primary, it is unblocked as soon as the primary is discovered, rather than waiting for all secondaries to be checked as well. .sp The pool opens one connection per server for monitoring, and each client opens its own connection to each server it uses for application operations. Background monitoring threads re\-scan servers independently roughly every 10 seconds. This interval is configurable with \fBheartbeatFrequencyMS\fP in the connection string. (See \fI\%mongoc_uri_t\fP\&.) .sp The connection string can also specify \fBwaitQueueTimeoutMS\fP to limit the time that \fI\%mongoc_client_pool_pop()\fP will wait for a client from the pool. (See \fI\%mongoc_uri_t\fP\&.) If \fBwaitQueueTimeoutMS\fP is specified, then it is necessary to confirm that a client was actually returned: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new ( \(dqmongodb://hostA,hostB/?replicaSet=my_rs&waitQueueTimeoutMS=1000\(dq); mongoc_client_pool_t *pool = mongoc_client_pool_new (uri); mongoc_client_t *client = mongoc_client_pool_pop (pool); if (client) { /* use the client for operations ... 
*/ mongoc_client_pool_push (pool, client); } else { /* take appropriate action for a timeout */ } .EE .UNINDENT .UNINDENT .sp See \fI\%Connection Pool Options\fP to configure pool size and behavior, and see \fI\%mongoc_client_pool_t\fP for an extended example of a multi\-threaded program that uses the driver in pooled mode. .SS Data Compression .sp The following guide explains how data compression support works between the MongoDB server and client. It also shows an example of how to connect to a server with data compression. .SS Compressing data to and from MongoDB .sp MongoDB 3.4 added Snappy compression support, while zlib compression was added in 3.6, and zstd compression in 4.2. To enable compression support, the client must be configured with which compressors to use: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = NULL; client = mongoc_client_new (\(dqmongodb://localhost:27017/?compressors=snappy,zlib,zstd\(dq); .EE .UNINDENT .UNINDENT .sp The \fBcompressors\fP option specifies the priority order of compressors the client wants to use. Messages are compressed if the client and server share any compressors in common. .sp Note that the compressor used by the server might not be the same compressor as the client used. For example, if the client uses the connection string \fBcompressors=zlib,snappy\fP, the client will use \fBzlib\fP compression to send data (if possible), but the server might still reply using \fBsnappy\fP, depending on how the server was configured. .sp The driver must be built with zlib and/or snappy and/or zstd support to enable compression support; any unknown (or not compiled in) compressor value will be ignored. .SS Cursors .SS Handling Cursor Failures .sp Cursors exist on a MongoDB server. However, the \fBmongoc_cursor_t\fP structure gives the local process a handle to the cursor. It is possible for errors to occur on the server while iterating a cursor on the client. Even a network partition may occur.
This means that applications should be robust in handling cursor failures. .sp While iterating cursors, you should check to see if an error has occurred. See the following example for how to robustly check for errors. .INDENT 0.0 .INDENT 3.5 .sp .EX static void print_all_documents (mongoc_collection_t *collection) { mongoc_cursor_t *cursor; const bson_t *doc; bson_error_t error; bson_t query = BSON_INITIALIZER; char *str; cursor = mongoc_collection_find_with_opts (collection, query, NULL, NULL); while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqFailed to iterate all documents: %s\en\(dq, error.message); } mongoc_cursor_destroy (cursor); } .EE .UNINDENT .UNINDENT .SS Destroying Server\-Side Cursors .sp The MongoDB C driver will automatically destroy a server\-side cursor when \fI\%mongoc_cursor_destroy()\fP is called. Failure to call this function when done with a cursor will leak memory client side as well as consume extra memory server side. If the cursor was configured to never timeout, it will become a memory leak on the server. .SS Tailable Cursors .sp Tailable cursors are cursors that remain open even after they\(aqve returned a final result. This way, if more documents are added to a collection (i.e., to the cursor\(aqs result set), then you can continue to call \fI\%mongoc_cursor_next()\fP to retrieve those additional results. .sp Here\(aqs a complete test case that demonstrates the use of tailable cursors. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Tailable cursors are for capped collections only. .UNINDENT .UNINDENT .sp An example to tail the oplog from a replica set. 
.sp mongoc\-tail.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include <time.h> #ifdef _WIN32 #define sleep(_n) Sleep ((_n) *1000) #endif static void print_bson (const bson_t *b) { char *str; str = bson_as_canonical_extended_json (b, NULL); fprintf (stdout, \(dq%s\en\(dq, str); bson_free (str); } static mongoc_cursor_t * query_collection (mongoc_collection_t *collection, uint32_t last_time) { mongoc_cursor_t *cursor; bson_t query; bson_t gt; bson_t opts; BSON_ASSERT (collection); bson_init (&query); BSON_APPEND_DOCUMENT_BEGIN (&query, \(dqts\(dq, &gt); BSON_APPEND_TIMESTAMP (&gt, \(dq$gt\(dq, last_time, 0); bson_append_document_end (&query, &gt); bson_init (&opts); BSON_APPEND_BOOL (&opts, \(dqtailable\(dq, true); BSON_APPEND_BOOL (&opts, \(dqawaitData\(dq, true); cursor = mongoc_collection_find_with_opts (collection, &query, &opts, NULL); bson_destroy (&query); bson_destroy (&opts); return cursor; } static void tail_collection (mongoc_collection_t *collection) { mongoc_cursor_t *cursor; uint32_t last_time; const bson_t *doc; bson_error_t error; bson_iter_t iter; BSON_ASSERT (collection); last_time = (uint32_t) time (NULL); while (true) { cursor = query_collection (collection, last_time); while (!mongoc_cursor_error (cursor, &error) && mongoc_cursor_more (cursor)) { if (mongoc_cursor_next (cursor, &doc)) { if (bson_iter_init_find (&iter, doc, \(dqts\(dq) && BSON_ITER_HOLDS_TIMESTAMP (&iter)) { bson_iter_timestamp (&iter, &last_time, NULL); } print_bson (doc); } } if (mongoc_cursor_error (cursor, &error)) { if (error.domain == MONGOC_ERROR_SERVER) { fprintf (stderr, \(dq%s\en\(dq, error.message); exit (1); } } mongoc_cursor_destroy (cursor); sleep (1); } } int main (int argc, char *argv[]) { mongoc_collection_t *collection; mongoc_client_t *client; mongoc_uri_t *uri; bson_error_t error; if (argc != 2) { fprintf (stderr, \(dqusage: %s MONGO_URI\en\(dq, argv[0]); return EXIT_FAILURE; } mongoc_init (); uri = mongoc_uri_new_with_error (argv[1], &error); if (!uri) {
fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, argv[1], error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqlocal\(dq, \(dqoplog.rs\(dq); tail_collection (collection); mongoc_collection_destroy (collection); mongoc_uri_destroy (uri); mongoc_client_destroy (client); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Let\(aqs compile and run this example against a replica set to see updates as they are made. .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-Wall \-o mongoc\-tail mongoc\-tail.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./mongoc\-tail mongodb://example.com/?replicaSet=myReplSet { \(dqh\(dq : \-8458503739429355503, \(dqns\(dq : \(dqtest.test\(dq, \(dqo\(dq : { \(dq_id\(dq : { \(dq$oid\(dq : \(dq5372ab0a25164be923d10d50\(dq } }, \(dqop\(dq : \(dqi\(dq, \(dqts\(dq : { \(dq$timestamp\(dq : { \(dqi\(dq : 1, \(dqt\(dq : 1400023818 } }, \(dqv\(dq : 2 } .EE .UNINDENT .UNINDENT .sp The line of output is a sample from performing \fBdb.test.insert({})\fP from the mongo shell on the replica set. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_cursor_set_max_await_time_ms()\fP\&. .fi .sp .UNINDENT .UNINDENT .SS Bulk Write Operations .sp This tutorial explains how to take advantage of MongoDB C driver bulk write operation features. Executing write operations in batches reduces the number of network round trips, increasing write throughput. .SS Bulk Insert .sp First we need to fetch a bulk operation handle from the \fI\%mongoc_collection_t\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_bulk_operation_t *bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); .EE .UNINDENT .UNINDENT .sp We can now start inserting documents to the bulk operation. These will be buffered until we execute the operation. 
.sp The bulk operation will coalesce insertions as a single batch for each consecutive call to \fI\%mongoc_bulk_operation_insert()\fP\&. This creates a pipelined effect when possible. .sp To execute the bulk operation and receive the result we call \fI\%mongoc_bulk_operation_execute()\fP\&. .sp bulk1.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk1 (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t reply; char *str; bool ret; int i; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); for (i = 0; i < 10000; i++) { doc = BCON_NEW (\(dqi\(dq, BCON_INT32 (i)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); } ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { fprintf (stderr, \(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk1\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk1 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Example \fBreply\fP document: .INDENT 0.0 .INDENT 3.5 .sp .EX {\(dqnInserted\(dq : 10000, \(dqnMatched\(dq : 0, \(dqnModified\(dq : 0, \(dqnRemoved\(dq : 0,
\(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [], \(dqwriteConcernErrors\(dq : [] } .EE .UNINDENT .UNINDENT .SS Mixed Bulk Write Operations .sp The MongoDB C driver also supports executing mixed bulk write operations. A batch of insert, update, and remove operations can be executed together using the bulk write operations API. .SS Ordered Bulk Write Operations .sp Ordered bulk write operations are batched and sent to the server in the order provided for serial execution. The \fBreply\fP document describes the type and count of operations performed. .sp bulk2.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk2 (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *query; bson_t *doc; bson_t *opts; bson_t reply; char *str; bool ret; int i; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); /* Remove everything */ query = bson_new (); mongoc_bulk_operation_remove (bulk, query); bson_destroy (query); /* Add a few documents */ for (i = 1; i < 4; i++) { doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (i)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); } /* {_id: 1} => {$set: {foo: \(dqbar\(dq}} */ query = BCON_NEW (\(dq_id\(dq, BCON_INT32 (1)); doc = BCON_NEW (\(dq$set\(dq, \(dq{\(dq, \(dqfoo\(dq, BCON_UTF8 (\(dqbar\(dq), \(dq}\(dq); mongoc_bulk_operation_update_many_with_opts (bulk, query, doc, NULL, &error); bson_destroy (query); bson_destroy (doc); /* {_id: 4} => {\(aq$inc\(aq: {\(aqj\(aq: 1}} (upsert) */ opts = BCON_NEW (\(dqupsert\(dq, BCON_BOOL (true)); query = BCON_NEW (\(dq_id\(dq, BCON_INT32 (4)); doc = BCON_NEW (\(dq$inc\(dq, \(dq{\(dq, \(dqj\(dq, BCON_INT32 (1), \(dq}\(dq); mongoc_bulk_operation_update_many_with_opts (bulk, query, doc, opts, &error); bson_destroy (query); bson_destroy (doc); bson_destroy (opts); /* replace {j:1} with {j:2} */ query = BCON_NEW (\(dqj\(dq, BCON_INT32 (1)); doc = BCON_NEW (\(dqj\(dq, BCON_INT32 (2));
mongoc_bulk_operation_replace_one_with_opts (bulk, query, doc, NULL, &error); bson_destroy (query); bson_destroy (doc); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk2\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk2 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Example \fBreply\fP document: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqnInserted\(dq : 3, \(dqnMatched\(dq : 2, \(dqnModified\(dq : 2, \(dqnRemoved\(dq : 10000, \(dqnUpserted\(dq : 1, \(dqupserted\(dq : [{\(dqindex\(dq : 5, \(dq_id\(dq : 4}], \(dqwriteErrors\(dq : [], \(dqwriteConcernErrors\(dq : [] } .EE .UNINDENT .UNINDENT .sp The \fBindex\fP field in the \fBupserted\fP array is the 0\-based index of the upsert operation; in this example, the sixth operation of the overall bulk operation was an upsert, so its index is 5. .SS Unordered Bulk Write Operations .sp Unordered bulk write operations are batched and sent to the server in \fIarbitrary order\fP where they may be executed in parallel. Any errors that occur are reported after all operations are attempted.
.sp In the next example the first and third operations fail due to the unique constraint on \fB_id\fP\&. Since we are doing unordered execution, the second and fourth operations succeed. .sp bulk3.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk3 (mongoc_collection_t *collection) { bson_t opts = BSON_INITIALIZER; mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *query; bson_t *doc; bson_t reply; char *str; bool ret; /* false indicates unordered */ BSON_APPEND_BOOL (&opts, \(dqordered\(dq, false); bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts); bson_destroy (&opts); /* Add a document */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (1)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); /* remove {_id: 2} */ query = BCON_NEW (\(dq_id\(dq, BCON_INT32 (2)); mongoc_bulk_operation_remove_one (bulk, query); bson_destroy (query); /* insert {_id: 3} */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (3)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); /* replace {_id:4} {\(aqi\(aq: 1} */ query = BCON_NEW (\(dq_id\(dq, BCON_INT32 (4)); doc = BCON_NEW (\(dqi\(dq, BCON_INT32 (1)); mongoc_bulk_operation_replace_one (bulk, query, doc, false); bson_destroy (query); bson_destroy (doc); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk3\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client =
mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk3 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Example \fBreply\fP document: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqnInserted\(dq : 0, \(dqnMatched\(dq : 1, \(dqnModified\(dq : 1, \(dqnRemoved\(dq : 1, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [ { \(dqindex\(dq : 0, \(dqcode\(dq : 11000, \(dqerrmsg\(dq : \(dqE11000 duplicate key error index: test.test.$_id_ dup key: { : 1 }\(dq }, { \(dqindex\(dq : 2, \(dqcode\(dq : 11000, \(dqerrmsg\(dq : \(dqE11000 duplicate key error index: test.test.$_id_ dup key: { : 3 }\(dq } ], \(dqwriteConcernErrors\(dq : [] } Error: E11000 duplicate key error index: test.test.$_id_ dup key: { : 1 } .EE .UNINDENT .UNINDENT .sp The \fI\%bson_error_t\fP domain is \fBMONGOC_ERROR_COMMAND\fP and its code is 11000. .SS Bulk Operation Bypassing Document Validation .sp This feature is only available when using MongoDB 3.2 and later. .sp By default bulk operations are validated against the schema, if any is defined. In certain cases however it may be necessary to bypass the document validation. 
.sp bulk5.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk5_fail (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t reply; char *str; bool ret; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); /* Two inserts */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (31)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (32)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); /* The above documents do not comply with the schema validation rules * we created previously, so this will result in an error */ ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } static void bulk5_success (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t reply; char *str; bool ret; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); /* Allow this document to bypass document validation.
* NOTE: When authentication is enabled, the authenticated user must have * either the \(dqdbadmin\(dq or \(dqrestore\(dq roles to bypass document validation */ mongoc_bulk_operation_set_bypass_document_validation (bulk, true); /* Two inserts */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (31)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (32)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } int main (void) { bson_t *options; bson_error_t error; mongoc_client_t *client; mongoc_collection_t *collection; mongoc_database_t *database; const char *uri_string = \(dqmongodb://localhost/?appname=bulk5\-example\(dq; mongoc_uri_t *uri; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); database = mongoc_client_get_database (client, \(dqtestasdf\(dq); /* Create schema validator */ options = BCON_NEW ( \(dqvalidator\(dq, \(dq{\(dq, \(dqnumber\(dq, \(dq{\(dq, \(dq$gte\(dq, BCON_INT32 (5), \(dq}\(dq, \(dq}\(dq); collection = mongoc_database_create_collection (database, \(dqcollname\(dq, options, &error); if (collection) { bulk5_fail (collection); bulk5_success (collection); mongoc_collection_destroy (collection); } else { fprintf (stderr, \(dqCouldn\(aqt create collection: \(aq%s\(aq\en\(dq, error.message); } bson_free (options); mongoc_uri_destroy (uri); mongoc_database_destroy (database); mongoc_client_destroy (client); mongoc_cleanup (); return 
EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Running the above example will result in: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqnInserted\(dq : 0, \(dqnMatched\(dq : 0, \(dqnModified\(dq : 0, \(dqnRemoved\(dq : 0, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [ { \(dqindex\(dq : 0, \(dqcode\(dq : 121, \(dqerrmsg\(dq : \(dqDocument failed validation\(dq } ] } Error: Document failed validation { \(dqnInserted\(dq : 2, \(dqnMatched\(dq : 0, \(dqnModified\(dq : 0, \(dqnRemoved\(dq : 0, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [] } .EE .UNINDENT .UNINDENT .sp The \fI\%bson_error_t\fP domain is \fBMONGOC_ERROR_COMMAND\fP\&. .SS Bulk Operation Write Concerns .sp By default bulk operations are executed with the \fI\%write_concern\fP of the collection they are executed against. A custom write concern can be passed to the \fI\%mongoc_collection_create_bulk_operation_with_opts()\fP method. Write concern errors (e.g. wtimeout) will be reported after all operations are attempted, regardless of execution order. 
.sp bulk4.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk4 (mongoc_collection_t *collection) { bson_t opts = BSON_INITIALIZER; mongoc_write_concern_t *wc; mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t reply; char *str; bool ret; wc = mongoc_write_concern_new (); mongoc_write_concern_set_w (wc, 4); mongoc_write_concern_set_wtimeout_int64 (wc, 100); /* milliseconds */ mongoc_write_concern_append (wc, &opts); bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts); /* Two inserts */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (10)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (11)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); mongoc_write_concern_destroy (wc); bson_destroy (&opts); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk4\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk4 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Example \fBreply\fP document and error message: .INDENT 0.0 .INDENT 3.5 .sp .EX {
\(dqnInserted\(dq : 2, \(dqnMatched\(dq : 0, \(dqnModified\(dq : 0, \(dqnRemoved\(dq : 0, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [], \(dqwriteConcernErrors\(dq : [ { \(dqcode\(dq : 64, \(dqerrmsg\(dq : \(dqwaiting for replication timed out\(dq } ] } Error: waiting for replication timed out .EE .UNINDENT .UNINDENT .sp The \fI\%bson_error_t\fP domain is \fBMONGOC_ERROR_WRITE_CONCERN\fP if there are write concern errors and no write errors. Write errors indicate failed operations, so they take precedence over write concern errors, which mean merely that the write concern is not satisfied \fIyet\fP\&. .SS Setting Collation Order .sp This feature is only available when using MongoDB 3.4 and later. .sp bulk\-collation.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> static void bulk_collation (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_t *opts; bson_t *doc; bson_t *selector; bson_t *update; bson_error_t error; bson_t reply; char *str; uint32_t ret; /* insert {_id: \(dqone\(dq} and {_id: \(dqOne\(dq} */ bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); doc = BCON_NEW (\(dq_id\(dq, BCON_UTF8 (\(dqone\(dq)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); doc = BCON_NEW (\(dq_id\(dq, BCON_UTF8 (\(dqOne\(dq)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); /* \(dqOne\(dq normally sorts before \(dqone\(dq; make \(dqone\(dq come first */ opts = BCON_NEW (\(dqcollation\(dq, \(dq{\(dq, \(dqlocale\(dq, BCON_UTF8 (\(dqen_US\(dq), \(dqcaseFirst\(dq, BCON_UTF8 (\(dqlower\(dq), \(dq}\(dq); /* set x=1 on the document with _id \(dqOne\(dq, which now sorts after \(dqone\(dq */ update = BCON_NEW (\(dq$set\(dq, \(dq{\(dq, \(dqx\(dq, BCON_INT64 (1), \(dq}\(dq); selector = BCON_NEW (\(dq_id\(dq, \(dq{\(dq, \(dq$gt\(dq, BCON_UTF8 (\(dqone\(dq), \(dq}\(dq); mongoc_bulk_operation_update_one_with_opts ( bulk, selector, update, opts, &error); ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); bson_destroy (update); bson_destroy (selector); bson_destroy (opts); mongoc_bulk_operation_destroy (bulk); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk\-collation\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqdb\(dq, \(dqcollection\(dq); bulk_collation (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Running the above example will result in: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqnInserted\(dq : 2, \(dqnMatched\(dq : 1, \(dqnModified\(dq : 1, \(dqnRemoved\(dq : 0, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [ ] } .EE .UNINDENT .UNINDENT .SS Unacknowledged Bulk Writes .sp Set \(dqw\(dq to zero for an unacknowledged write. The driver sends unacknowledged writes using the legacy opcodes \fBOP_INSERT\fP, \fBOP_UPDATE\fP, and \fBOP_DELETE\fP\&. 
.sp bulk6.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> static void bulk6 (mongoc_collection_t *collection) { bson_t opts = BSON_INITIALIZER; mongoc_write_concern_t *wc; mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t *selector; bson_t reply; char *str; bool ret; wc = mongoc_write_concern_new (); mongoc_write_concern_set_w (wc, 0); mongoc_write_concern_append (wc, &opts); bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts); doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (10)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); selector = BCON_NEW (\(dq_id\(dq, BCON_INT32 (11)); mongoc_bulk_operation_remove_one (bulk, selector); bson_destroy (selector); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); mongoc_write_concern_destroy (wc); bson_destroy (&opts); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk6\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk6 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp The \fBreply\fP document is empty: .INDENT 0.0 .INDENT 3.5 .sp .EX { } .EE .UNINDENT .UNINDENT .SS Further Reading .sp See the \fI\%Driver
Bulk API Spec\fP, which describes bulk write operations for all MongoDB drivers. .SS Aggregation Framework Examples .sp This document provides a number of practical examples that display the capabilities of the aggregation framework. .sp The \fI\%Aggregations using the Zip Codes Data Set\fP examples use a publicly available data set of all zipcodes and populations in the United States. These data are available at: \fI\%zips.json\fP\&. .SS Requirements .sp Let\(aqs check if everything is installed. .sp Use the following command to load the zips.json data set into a mongod instance: .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongoimport \-\-drop \-d test \-c zipcodes zips.json .EE .UNINDENT .UNINDENT .sp Let\(aqs use the MongoDB shell to verify that everything was imported successfully. .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongo test connecting to: test > db.zipcodes.count() 29467 > db.zipcodes.findOne() { \(dq_id\(dq : \(dq35004\(dq, \(dqcity\(dq : \(dqACMAR\(dq, \(dqloc\(dq : [ \-86.51557, 33.584132 ], \(dqpop\(dq : 6055, \(dqstate\(dq : \(dqAL\(dq } .EE .UNINDENT .UNINDENT .SS Aggregations using the Zip Codes Data Set .sp Each document in this collection has the following form: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dq35004\(dq, \(dqcity\(dq : \(dqAcmar\(dq, \(dqstate\(dq : \(dqAL\(dq, \(dqpop\(dq : 6055, \(dqloc\(dq : [\-86.51557, 33.584132] } .EE .UNINDENT .UNINDENT .sp In these documents: .INDENT 0.0 .IP \(bu 2 The \fB_id\fP field holds the zipcode as a string. .IP \(bu 2 The \fBcity\fP field holds the city name. .IP \(bu 2 The \fBstate\fP field holds the two letter state abbreviation. .IP \(bu 2 The \fBpop\fP field holds the population. .IP \(bu 2 The \fBloc\fP field holds the location as a \fB[longitude, latitude]\fP array.
.UNINDENT .SS States with Populations Over 10 Million .sp To get all states with a population greater than 10 million, use the following aggregation pipeline: .sp aggregation1.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> static void print_pipeline (mongoc_collection_t *collection) { mongoc_cursor_t *cursor; bson_error_t error; const bson_t *doc; bson_t *pipeline; char *str; pipeline = BCON_NEW (\(dqpipeline\(dq, \(dq[\(dq, \(dq{\(dq, \(dq$group\(dq, \(dq{\(dq, \(dq_id\(dq, \(dq$state\(dq, \(dqtotal_pop\(dq, \(dq{\(dq, \(dq$sum\(dq, \(dq$pop\(dq, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dq{\(dq, \(dq$match\(dq, \(dq{\(dq, \(dqtotal_pop\(dq, \(dq{\(dq, \(dq$gte\(dq, BCON_INT32 (10000000), \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dq]\(dq); cursor = mongoc_collection_aggregate ( collection, MONGOC_QUERY_NONE, pipeline, NULL, NULL); while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqCursor Failure: %s\en\(dq, error.message); } mongoc_cursor_destroy (cursor); bson_destroy (pipeline); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost:27017/?appname=aggregation\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqzipcodes\(dq); print_pipeline (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp You should see a result like
the following: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dqPA\(dq, \(dqtotal_pop\(dq : 11881643 } { \(dq_id\(dq : \(dqOH\(dq, \(dqtotal_pop\(dq : 10847115 } { \(dq_id\(dq : \(dqNY\(dq, \(dqtotal_pop\(dq : 17990455 } { \(dq_id\(dq : \(dqFL\(dq, \(dqtotal_pop\(dq : 12937284 } { \(dq_id\(dq : \(dqTX\(dq, \(dqtotal_pop\(dq : 16986510 } { \(dq_id\(dq : \(dqIL\(dq, \(dqtotal_pop\(dq : 11430472 } { \(dq_id\(dq : \(dqCA\(dq, \(dqtotal_pop\(dq : 29760021 } .EE .UNINDENT .UNINDENT .sp The above aggregation pipeline is built from two pipeline operators: \fB$group\fP and \fB$match\fP\&. .sp The \fB$group\fP pipeline operator requires an \fB_id\fP field, which specifies the grouping key; the remaining fields specify how to generate the composite value and must use one of the group aggregation functions: \fB$addToSet\fP, \fB$first\fP, \fB$last\fP, \fB$max\fP, \fB$min\fP, \fB$avg\fP, \fB$push\fP, \fB$sum\fP\&. The \fB$match\fP pipeline operator syntax is the same as the read operation query syntax. .sp The \fB$group\fP stage reads all documents and creates a separate document for each state, for example: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dqWA\(dq, \(dqtotal_pop\(dq : 4866692 } .EE .UNINDENT .UNINDENT .sp The \fBtotal_pop\fP field uses the \fB$sum\fP aggregation function to sum the values of all \fBpop\fP fields in the source documents. .sp Documents created by \fB$group\fP are piped to the \fB$match\fP pipeline operator. It returns the documents whose \fBtotal_pop\fP value is greater than or equal to 10 million.
.SS Average City Population by State .sp To get the first three states with the greatest average population per city, use the following aggregation: .INDENT 0.0 .INDENT 3.5 .sp .EX pipeline = BCON_NEW (\(dqpipeline\(dq, \(dq[\(dq, \(dq{\(dq, \(dq$group\(dq, \(dq{\(dq, \(dq_id\(dq, \(dq{\(dq, \(dqstate\(dq, \(dq$state\(dq, \(dqcity\(dq, \(dq$city\(dq, \(dq}\(dq, \(dqpop\(dq, \(dq{\(dq, \(dq$sum\(dq, \(dq$pop\(dq, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dq{\(dq, \(dq$group\(dq, \(dq{\(dq, \(dq_id\(dq, \(dq$_id.state\(dq, \(dqavg_city_pop\(dq, \(dq{\(dq, \(dq$avg\(dq, \(dq$pop\(dq, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dq{\(dq, \(dq$sort\(dq, \(dq{\(dq, \(dqavg_city_pop\(dq, BCON_INT32 (\-1), \(dq}\(dq, \(dq}\(dq, \(dq{\(dq, \(dq$limit\(dq, BCON_INT32 (3), \(dq}\(dq, \(dq]\(dq); .EE .UNINDENT .UNINDENT .sp This aggregation pipeline produces: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dqDC\(dq, \(dqavg_city_pop\(dq : 303450.0 } { \(dq_id\(dq : \(dqFL\(dq, \(dqavg_city_pop\(dq : 27942.29805615551 } { \(dq_id\(dq : \(dqCA\(dq, \(dqavg_city_pop\(dq : 27735.341099720412 } .EE .UNINDENT .UNINDENT .sp The above aggregation pipeline is built from three pipeline operators: \fB$group\fP, \fB$sort\fP and \fB$limit\fP\&. .sp The first \fB$group\fP operator creates the following documents: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : { \(dqstate\(dq : \(dqWY\(dq, \(dqcity\(dq : \(dqSmoot\(dq }, \(dqpop\(dq : 414 } .EE .UNINDENT .UNINDENT .sp Note that the \fB$group\fP operator can\(aqt use nested documents except in the \fB_id\fP field. .sp The second \fB$group\fP uses these documents to create the following documents: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dqFL\(dq, \(dqavg_city_pop\(dq : 27942.29805615551 } .EE .UNINDENT .UNINDENT .sp These documents are sorted by the \fBavg_city_pop\fP field in descending order. Finally, the \fB$limit\fP pipeline operator returns the first 3 documents from the sorted set.
.SS \(dqdistinct\(dq and \(dqmapReduce\(dq .sp This document provides some practical, simple, examples to demonstrate the \fBdistinct\fP and \fBmapReduce\fP commands. .SS Setup .sp First we\(aqll write some code to insert sample data: .sp doc\-common\-insert.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* Don\(aqt try to compile this file on its own. It\(aqs meant to be #included by example code */ /* Insert some sample data */ bool insert_data (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; enum N { ndocs = 4 }; bson_t *docs[ndocs]; bson_error_t error; int i = 0; bool ret; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); docs[0] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (1.0), \(dqtags\(dq, \(dq[\(dq, \(dqdog\(dq, \(dqcat\(dq, \(dq]\(dq); docs[1] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (2.0), \(dqtags\(dq, \(dq[\(dq, \(dqcat\(dq, \(dq]\(dq); docs[2] = BCON_NEW ( \(dqx\(dq, BCON_DOUBLE (2.0), \(dqtags\(dq, \(dq[\(dq, \(dqmouse\(dq, \(dqcat\(dq, \(dqdog\(dq, \(dq]\(dq); docs[3] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (3.0), \(dqtags\(dq, \(dq[\(dq, \(dq]\(dq); for (i = 0; i < ndocs; i++) { mongoc_bulk_operation_insert (bulk, docs[i]); bson_destroy (docs[i]); docs[i] = NULL; } ret = mongoc_bulk_operation_execute (bulk, NULL, &error); if (!ret) { fprintf (stderr, \(dqError inserting data: %s\en\(dq, error.message); } mongoc_bulk_operation_destroy (bulk); return ret; } /* A helper which we\(aqll use a lot later on */ void print_res (const bson_t *reply) { char *str; BSON_ASSERT (reply); str = bson_as_canonical_extended_json (reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } .EE .UNINDENT .UNINDENT .SS \(dqdistinct\(dq command .sp This is how to use the \fBdistinct\fP command to get the distinct values of \fBx\fP which are greater than \fB1\fP: .sp distinct.c .INDENT 0.0 .INDENT 3.5 .sp .EX bool distinct (mongoc_database_t *database) { bson_t *command; bson_t reply; bson_error_t error; bool res; bson_iter_t iter; bson_iter_t array_iter; 
double val; command = BCON_NEW (\(dqdistinct\(dq, BCON_UTF8 (COLLECTION_NAME), \(dqkey\(dq, BCON_UTF8 (\(dqx\(dq), \(dqquery\(dq, \(dq{\(dq, \(dqx\(dq, \(dq{\(dq, \(dq$gt\(dq, BCON_DOUBLE (1.0), \(dq}\(dq, \(dq}\(dq); res = mongoc_database_command_simple (database, command, NULL, &reply, &error); if (!res) { fprintf (stderr, \(dqError with distinct: %s\en\(dq, error.message); goto cleanup; } /* Do something with reply (in this case iterate through the values) */ if (!(bson_iter_init_find (&iter, &reply, \(dqvalues\(dq) && BSON_ITER_HOLDS_ARRAY (&iter) && bson_iter_recurse (&iter, &array_iter))) { fprintf (stderr, \(dqCouldn\(aqt extract \e\(dqvalues\e\(dq field from response\en\(dq); goto cleanup; } while (bson_iter_next (&array_iter)) { if (BSON_ITER_HOLDS_DOUBLE (&array_iter)) { val = bson_iter_double (&array_iter); printf (\(dqNext double: %f\en\(dq, val); } } cleanup: /* cleanup */ bson_destroy (command); bson_destroy (&reply); return res; } .EE .UNINDENT .UNINDENT .SS \(dqmapReduce\(dq \- basic example .sp A simple example using the map reduce framework. It simply adds up the number of occurrences of each \(dqtag\(dq. .sp First define the \fBmap\fP and \fBreduce\fP functions: .sp constants.c .INDENT 0.0 .INDENT 3.5 .sp .EX const char *const COLLECTION_NAME = \(dqthings\(dq; /* Our map function just emits a single (key, 1) pair for each tag in the array: */ const char *const MAPPER = \(dqfunction () {\(dq \(dqthis.tags.forEach(function(z) {\(dq \(dqemit(z, 1);\(dq \(dq});\(dq \(dq}\(dq; /* The reduce function sums over all of the emitted values for a given key: */ const char *const REDUCER = \(dqfunction (key, values) {\(dq \(dqvar total = 0;\(dq \(dqfor (var i = 0; i < values.length; i++) {\(dq \(dqtotal += values[i];\(dq \(dq}\(dq \(dqreturn total;\(dq \(dq}\(dq; /* Note We can\(aqt just return values.length as the reduce function might be called iteratively on the results of other reduce steps. */ .EE .UNINDENT .UNINDENT .sp Run the \fBmapReduce\fP command. 
Use the generic command helpers (e.g. \fI\%mongoc_database_command_simple()\fP). Do not use the read command helpers (e.g. \fI\%mongoc_database_read_command_with_opts()\fP) because they are considered retryable read operations. If retryable reads are enabled, those operations will retry once on a retryable error, giving undesirable behavior for \fBmapReduce\fP\&. .sp map\-reduce\-basic.c .INDENT 0.0 .INDENT 3.5 .sp .EX bool map_reduce_basic (mongoc_database_t *database) { bson_t reply; bson_t *command; bool res; bson_error_t error; mongoc_cursor_t *cursor; const bson_t *doc; bool query_done = false; const char *out_collection_name = \(dqoutCollection\(dq; mongoc_collection_t *out_collection; /* Empty find query */ bson_t find_query = BSON_INITIALIZER; /* Construct the mapReduce command */ /* Other arguments can also be specified here, like \(dqquery\(dq or \(dqlimit\(dq and so on */ command = BCON_NEW (\(dqmapReduce\(dq, BCON_UTF8 (COLLECTION_NAME), \(dqmap\(dq, BCON_CODE (MAPPER), \(dqreduce\(dq, BCON_CODE (REDUCER), \(dqout\(dq, BCON_UTF8 (out_collection_name)); res = mongoc_database_command_simple (database, command, NULL, &reply, &error); if (!res) { fprintf (stderr, \(dqMapReduce failed: %s\en\(dq, error.message); goto cleanup; } /* Do something with the reply (it doesn\(aqt contain the mapReduce results) */ print_res (&reply); /* Now we\(aqll query outCollection to see what the results are */ out_collection = mongoc_database_get_collection (database, out_collection_name); cursor = mongoc_collection_find_with_opts ( out_collection, &find_query, NULL, NULL); query_done = true; /* Do something with the results */ while (mongoc_cursor_next (cursor, &doc)) { print_res (doc); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqERROR: %s\en\(dq, error.message); res = false; goto cleanup; } cleanup: /* cleanup */ if (query_done) { mongoc_cursor_destroy (cursor); mongoc_collection_destroy (out_collection); } bson_destroy (&reply); bson_destroy (command);
return res; } .EE .UNINDENT .UNINDENT .SS \(dqmapReduce\(dq \- more complicated example .sp You must have a replica set running for this. .sp In this example we contact a secondary in the replica set and do an \(dqinline\(dq map reduce, so the results are returned immediately: .sp map\-reduce\-advanced.c .INDENT 0.0 .INDENT 3.5 .sp .EX bool map_reduce_advanced (mongoc_database_t *database) { bson_t *command; bson_error_t error; bool res = true; mongoc_cursor_t *cursor; mongoc_read_prefs_t *read_pref; const bson_t *doc; /* Construct the mapReduce command */ /* Other arguments can also be specified here, like \(dqquery\(dq or \(dqlimit\(dq and so on */ /* Read the results inline from a secondary replica */ command = BCON_NEW (\(dqmapReduce\(dq, BCON_UTF8 (COLLECTION_NAME), \(dqmap\(dq, BCON_CODE (MAPPER), \(dqreduce\(dq, BCON_CODE (REDUCER), \(dqout\(dq, \(dq{\(dq, \(dqinline\(dq, \(dq1\(dq, \(dq}\(dq); read_pref = mongoc_read_prefs_new (MONGOC_READ_SECONDARY); cursor = mongoc_database_command ( database, MONGOC_QUERY_NONE, 0, 0, 0, command, NULL, read_pref); /* Do something with the results */ while (mongoc_cursor_next (cursor, &doc)) { print_res (doc); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqERROR: %s\en\(dq, error.message); res = false; } mongoc_cursor_destroy (cursor); mongoc_read_prefs_destroy (read_pref); bson_destroy (command); return res; } .EE .UNINDENT .UNINDENT .SS Running the Examples .sp Here\(aqs how to run the example code: .sp basic\-aggregation.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* * Copyright 2016 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the \(dqLicense\(dq); * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE\-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an \(dqAS IS\(dq BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <mongoc/mongoc.h> #include <stdio.h> #include \(dqconstants.c\(dq #include \(dq../doc\-common\-insert.c\(dq #include \(dqdistinct.c\(dq #include \(dqmap\-reduce\-basic.c\(dq #include \(dqmap\-reduce\-advanced.c\(dq int main (int argc, char *argv[]) { mongoc_database_t *database = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *collection = NULL; mongoc_uri_t *uri = NULL; bson_error_t error; char *host_and_port = NULL; int exit_code = EXIT_FAILURE; if (argc != 2) { fprintf (stderr, \(dqusage: %s CONNECTION\-STRING\en\(dq, argv[0]); fprintf (stderr, \(dqthe connection string can be of the following forms:\en\(dq); fprintf (stderr, \(dqlocalhost\et\et\et\etlocal machine\en\(dq); fprintf (stderr, \(dqlocalhost:27018\et\et\et\etlocal machine on port 27018\en\(dq); fprintf (stderr, \(dqmongodb://user:pass@localhost:27017\et\(dq \(dqlocal machine on port 27017, and authenticate with username \(dq \(dquser and password pass\en\(dq); return exit_code; } mongoc_init (); if (strncmp (argv[1], \(dqmongodb://\(dq, 10) == 0) { host_and_port = bson_strdup (argv[1]); } else { host_and_port = bson_strdup_printf (\(dqmongodb://%s\(dq, argv[1]); } uri = mongoc_uri_new_with_error (host_and_port, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, host_and_port, error.message); goto cleanup; } client = mongoc_client_new_from_uri (uri); if (!client) { goto cleanup; } mongoc_client_set_error_api (client, 2); database = mongoc_client_get_database (client, \(dqtest\(dq); collection = mongoc_database_get_collection (database,
COLLECTION_NAME); printf (\(dqInserting data\en\(dq); if (!insert_data (collection)) { goto cleanup; } printf (\(dqdistinct\en\(dq); if (!distinct (database)) { goto cleanup; } printf (\(dqmap reduce\en\(dq); if (!map_reduce_basic (database)) { goto cleanup; } printf (\(dqmore complicated map reduce\en\(dq); if (!map_reduce_advanced (database)) { goto cleanup; } exit_code = EXIT_SUCCESS; cleanup: if (collection) { mongoc_collection_destroy (collection); } if (database) { mongoc_database_destroy (database); } if (client) { mongoc_client_destroy (client); } if (uri) { mongoc_uri_destroy (uri); } if (host_and_port) { bson_free (host_and_port); } mongoc_cleanup (); return exit_code; } .EE .UNINDENT .UNINDENT .sp If you want to try the advanced map reduce example with a secondary, start a replica set (instructions for how to do this can be found \fI\%here\fP). .sp Otherwise, just start an instance of MongoDB: .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongod .EE .UNINDENT .UNINDENT .sp Now compile and run the example program: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cd examples/basic_aggregation/ $ gcc \-Wall \-o agg\-example basic\-aggregation.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./agg\-example localhost Inserting data distinct Next double: 2.000000 Next double: 3.000000 map reduce { \(dqresult\(dq : \(dqoutCollection\(dq, \(dqtimeMillis\(dq : 155, \(dqcounts\(dq : { \(dqinput\(dq : 84, \(dqemit\(dq : 126, \(dqreduce\(dq : 3, \(dqoutput\(dq : 3 }, \(dqok\(dq : 1 } { \(dq_id\(dq : \(dqcat\(dq, \(dqvalue\(dq : 63 } { \(dq_id\(dq : \(dqdog\(dq, \(dqvalue\(dq : 42 } { \(dq_id\(dq : \(dqmouse\(dq, \(dqvalue\(dq : 21 } more complicated map reduce { \(dqresults\(dq : [ { \(dq_id\(dq : \(dqcat\(dq, \(dqvalue\(dq : 63 }, { \(dq_id\(dq : \(dqdog\(dq, \(dqvalue\(dq : 42 }, { \(dq_id\(dq : \(dqmouse\(dq, \(dqvalue\(dq : 21 } ], \(dqtimeMillis\(dq : 14, \(dqcounts\(dq : { \(dqinput\(dq : 84, \(dqemit\(dq : 126, \(dqreduce\(dq : 3, \(dqoutput\(dq : 3 }, \(dqok\(dq : 1 } .EE 
.UNINDENT .UNINDENT .SS Using libmongoc in a Microsoft Visual Studio project .sp \fI\%Download and install libmongoc on your system\fP, then open Visual Studio, select \(dqFile→New→Project...\(dq, and create a new Win32 Console Application. [image] .sp Remember to switch the platform from 32\-bit to 64\-bit: [image] .sp Right\-click on your console application in the Solution Explorer and select \(dqProperties\(dq. Choose to edit properties for \(dqAll Configurations\(dq, expand the \(dqC/C++\(dq options and choose \(dqGeneral\(dq. Add to the \(dqAdditional Include Directories\(dq these paths: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\emongo\-c\-driver\einclude\elibbson\-1.0 C:\emongo\-c\-driver\einclude\elibmongoc\-1.0 .EE .UNINDENT .UNINDENT [image] .sp (If you chose a different \fB$PREFIX\fP \fI\%when you installed mongo\-c\-driver\fP, your include paths will be different.) .sp Also in the Properties dialog, expand the \(dqLinker\(dq options and choose \(dqInput\(dq, and add to the \(dqAdditional Dependencies\(dq these libraries: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\emongo\-c\-driver\elib\ebson\-1.0.lib C:\emongo\-c\-driver\elib\emongoc\-1.0.lib .EE .UNINDENT .UNINDENT [image] .sp Adding these libraries as dependencies provides linker symbols to build your application, but to actually run it, libbson\(aqs and libmongoc\(aqs DLLs must be in your executable path. Select \(dqDebugging\(dq in the Properties dialog, and set the \(dqEnvironment\(dq option to: .INDENT 0.0 .INDENT 3.5 .sp .EX PATH=c:/mongo\-c\-driver/bin .EE .UNINDENT .UNINDENT [image] .sp Finally, include \(dqmongoc/mongoc.h\(dq in your project\(aqs \(dqstdafx.h\(dq: .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> .EE .UNINDENT .UNINDENT .SS Static linking .sp Following the instructions above, you have dynamically linked your application to the libbson and libmongoc DLLs. This is usually the right choice. 
If you want to link statically instead, update your \(dqAdditional Dependencies\(dq list by removing \fBbson\-1.0.lib\fP and \fBmongoc\-1.0.lib\fP and replacing them with these libraries: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\emongo\-c\-driver\elib\ebson\-static\-1.0.lib C:\emongo\-c\-driver\elib\emongoc\-static\-1.0.lib ws2_32.lib Secur32.lib Crypt32.lib BCrypt.lib .EE .UNINDENT .UNINDENT [image] .sp (To explain the purpose of each library: \fBbson\-static\-1.0.lib\fP and \fBmongoc\-static\-1.0.lib\fP are static archives of the driver code. The socket library \fBws2_32\fP is required by libbson, which uses the socket routine \fBgethostname\fP to help guarantee ObjectId uniqueness. The \fBBCrypt\fP library is used by libmongoc for TLS connections to MongoDB, and \fBSecur32\fP and \fBCrypt32\fP are required for enterprise authentication methods like Kerberos.) .sp Finally, define two preprocessor symbols before including \fBmongoc/mongoc.h\fP in your \fBstdafx.h\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX #define BSON_STATIC #define MONGOC_STATIC #include <mongoc/mongoc.h> .EE .UNINDENT .UNINDENT .sp Making these changes to your project is only required for static linking; for most people, the dynamic\-linking instructions above are preferred. .SS Next Steps .sp Now you can build and debug applications in Visual Studio that use libbson and libmongoc. Proceed to \fI\%Making a Connection\fP in the tutorial to learn how to connect to MongoDB and perform operations. .SS Manage Collection Indexes .sp To create indexes on a MongoDB collection, use \fI\%mongoc_collection_create_indexes_with_opts()\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX // \(gakeys\(ga represents an ascending index on field \(gax\(ga. 
bson_t *keys = BCON_NEW (\(dqx\(dq, BCON_INT32 (1)); mongoc_index_model_t *im = mongoc_index_model_new (keys, NULL /* opts */); if (mongoc_collection_create_indexes_with_opts ( coll, &im, 1, NULL /* opts */, NULL /* reply */, &error)) { printf (\(dqSuccessfully created index\en\(dq); } else { bson_destroy (keys); HANDLE_ERROR (\(dqFailed to create index: %s\(dq, error.message); } bson_destroy (keys); .EE .UNINDENT .UNINDENT .sp To list indexes, use \fI\%mongoc_collection_find_indexes_with_opts()\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_cursor_t *cursor = mongoc_collection_find_indexes_with_opts (coll, NULL /* opts */); printf (\(dqListing indexes:\en\(dq); const bson_t *got; while (mongoc_cursor_next (cursor, &got)) { char *got_str = bson_as_canonical_extended_json (got, NULL); printf (\(dq %s\en\(dq, got_str); bson_free (got_str); } if (mongoc_cursor_error (cursor, &error)) { mongoc_cursor_destroy (cursor); HANDLE_ERROR (\(dqFailed to list indexes: %s\(dq, error.message); } mongoc_cursor_destroy (cursor); .EE .UNINDENT .UNINDENT .sp To drop an index, use \fI\%mongoc_collection_drop_index_with_opts()\fP\&. The index name may be obtained from the \fBkeys\fP document with \fI\%mongoc_collection_keys_to_index_string()\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX bson_t *keys = BCON_NEW (\(dqx\(dq, BCON_INT32 (1)); char *index_name = mongoc_collection_keys_to_index_string (keys); if (mongoc_collection_drop_index_with_opts ( coll, index_name, NULL /* opts */, &error)) { printf (\(dqSuccessfully dropped index\en\(dq); } else { bson_free (index_name); bson_destroy (keys); HANDLE_ERROR (\(dqFailed to drop index: %s\(dq, error.message); } bson_free (index_name); bson_destroy (keys); .EE .UNINDENT .UNINDENT .sp For a full example, see \fI\%example\-manage\-collection\-indexes.c\fP\&. .SS Manage Atlas Search Indexes .sp To create an Atlas Search Index, use the \fBcreateSearchIndexes\fP command: .INDENT 0.0 .INDENT 3.5 .sp .EX bson_t cmd; // Create command. 
{ char *cmd_str = bson_strdup_printf ( BSON_STR ({ \(dqcreateSearchIndexes\(dq : \(dq%s\(dq, \(dqindexes\(dq : [ { \(dqdefinition\(dq : {\(dqmappings\(dq : {\(dqdynamic\(dq : false}}, \(dqname\(dq : \(dqtest\-index\(dq } ] }), collname); ASSERT (bson_init_from_json (&cmd, cmd_str, \-1, &error)); bson_free (cmd_str); } if (!mongoc_collection_command_simple ( coll, &cmd, NULL /* read_prefs */, NULL /* reply */, &error)) { bson_destroy (&cmd); HANDLE_ERROR (\(dqFailed to run createSearchIndexes: %s\(dq, error.message); } printf (\(dqCreated index: \e\(dqtest\-index\e\(dq\en\(dq); bson_destroy (&cmd); .EE .UNINDENT .UNINDENT .sp To list Atlas Search Indexes, use the \fB$listSearchIndexes\fP aggregation stage: .INDENT 0.0 .INDENT 3.5 .sp .EX const char *pipeline_str = BSON_STR ({\(dqpipeline\(dq : [ {\(dq$listSearchIndexes\(dq : {}} ]}); bson_t pipeline; ASSERT (bson_init_from_json (&pipeline, pipeline_str, \-1, &error)); mongoc_cursor_t *cursor = mongoc_collection_aggregate (coll, MONGOC_QUERY_NONE, &pipeline, NULL /* opts */, NULL /* read_prefs */); printf (\(dqListing indexes:\en\(dq); const bson_t *got; while (mongoc_cursor_next (cursor, &got)) { char *got_str = bson_as_canonical_extended_json (got, NULL); printf (\(dq %s\en\(dq, got_str); bson_free (got_str); } if (mongoc_cursor_error (cursor, &error)) { bson_destroy (&pipeline); mongoc_cursor_destroy (cursor); HANDLE_ERROR (\(dqFailed to run $listSearchIndexes: %s\(dq, error.message); } bson_destroy (&pipeline); mongoc_cursor_destroy (cursor); .EE .UNINDENT .UNINDENT .sp To update an Atlas Search Index, use the \fBupdateSearchIndex\fP command: .INDENT 0.0 .INDENT 3.5 .sp .EX bson_t cmd; // Create command. 
{ char *cmd_str = bson_strdup_printf ( BSON_STR ({ \(dqupdateSearchIndex\(dq : \(dq%s\(dq, \(dqdefinition\(dq : {\(dqmappings\(dq : {\(dqdynamic\(dq : true}}, \(dqname\(dq : \(dqtest\-index\(dq }), collname); ASSERT (bson_init_from_json (&cmd, cmd_str, \-1, &error)); bson_free (cmd_str); } if (!mongoc_collection_command_simple ( coll, &cmd, NULL /* read_prefs */, NULL /* reply */, &error)) { bson_destroy (&cmd); HANDLE_ERROR (\(dqFailed to run updateSearchIndex: %s\(dq, error.message); } printf (\(dqUpdated index: \e\(dqtest\-index\e\(dq\en\(dq); bson_destroy (&cmd); .EE .UNINDENT .UNINDENT .sp To drop an Atlas Search Index, use the \fBdropSearchIndex\fP command: .INDENT 0.0 .INDENT 3.5 .sp .EX bson_t cmd; // Create command. { char *cmd_str = bson_strdup_printf ( BSON_STR ({\(dqdropSearchIndex\(dq : \(dq%s\(dq, \(dqname\(dq : \(dqtest\-index\(dq}), collname); ASSERT (bson_init_from_json (&cmd, cmd_str, \-1, &error)); bson_free (cmd_str); } if (!mongoc_collection_command_simple ( coll, &cmd, NULL /* read_prefs */, NULL /* reply */, &error)) { bson_destroy (&cmd); HANDLE_ERROR (\(dqFailed to run dropSearchIndex: %s\(dq, error.message); } printf (\(dqDropped index: \e\(dqtest\-index\e\(dq\en\(dq); bson_destroy (&cmd); .EE .UNINDENT .UNINDENT .sp For a full example, see \fI\%example\-manage\-search\-indexes.c\fP\&. .SS Aids for Debugging .SS GDB .sp This repository contains a \fB\&.gdbinit\fP file that contains helper functions to aid debugging of data structures. GDB will load this file \fI\%automatically\fP if you have added the directory which contains the \fI\&.gdbinit\fP file to GDB\(aqs \fI\%auto\-load safe\-path\fP, \fIand\fP you start GDB from the directory which holds the \fI\&.gdbinit\fP file. .sp You can see the safe\-path with \fBshow auto\-load safe\-path\fP on a GDB prompt. 
You can configure it by setting it in \fB~/.gdbinit\fP with: .INDENT 0.0 .INDENT 3.5 .sp .EX add\-auto\-load\-safe\-path /path/to/mongo\-c\-driver .EE .UNINDENT .UNINDENT .sp If you haven\(aqt added the path to your auto\-load safe\-path, or start GDB in another directory, load the file with: .INDENT 0.0 .INDENT 3.5 .sp .EX source path/to/mongo\-c\-driver/.gdbinit .EE .UNINDENT .UNINDENT .sp The \fB\&.gdbinit\fP file defines the \fBprintbson\fP function, which shows the contents of a \fBbson_t *\fP variable. If you have a local \fBbson_t\fP, then you must prefix the variable with a \fI&\fP\&. .sp An example GDB session looks like: .INDENT 0.0 .INDENT 3.5 .sp .EX (gdb) printbson bson ALLOC [0x555556cd7310 + 0] (len=475) { \(aqbool\(aq : true, \(aqint32\(aq : NumberInt(\(dq42\(dq), \(aqint64\(aq : NumberLong(\(dq3000000042\(dq), \(aqstring\(aq : \(dqStŕìñg\(dq, \(aqobjectId\(aq : ObjectID(\(dq5A1442F3122D331C3C6757E1\(dq), \(aqutcDateTime\(aq : UTCDateTime(1511277299031), \(aqarrayOfInts\(aq : [ \(aq0\(aq : NumberInt(\(dq1\(dq), \(aq1\(aq : NumberInt(\(dq2\(dq) ], \(aqembeddedDocument\(aq : { \(aqarrayOfStrings\(aq : [ \(aq0\(aq : \(dqone\(dq, \(aq1\(aq : \(dqtwo\(dq ], \(aqdouble\(aq : 2.718280, \(aqnotherDoc\(aq : { \(aqtrue\(aq : NumberInt(\(dq1\(dq), \(aqfalse\(aq : false } }, \(aqbinary\(aq : Binary(\(dq02\(dq, \(dq3031343532333637\(dq), \(aqregex\(aq : Regex(\(dq@[a\-z]+@\(dq, \(dqim\(dq), \(aqnull\(aq : null, \(aqjs\(aq : JavaScript(\(dqprint foo\(dq), \(aqjsws\(aq : JavaScript(\(dqprint foo\(dq) with scope: { \(aqf\(aq : NumberInt(\(dq42\(dq), \(aqa\(aq : [ \(aq0\(aq : 3.141593, \(aq1\(aq : 2.718282 ] }, \(aqtimestamp\(aq : Timestamp(4294967295, 4294967295), \(aqdouble\(aq : 3.141593 } .EE .UNINDENT .UNINDENT .SS LLDB .sp The mongo\-c\-driver repository contains a script \fBlldb_bson.py\fP that can be imported into an LLDB session and allows rich inspection of BSON values. 
.sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 The \fBlldb_bson.py\fP module requires an LLDB with Python 3.8 or newer. .UNINDENT .UNINDENT .sp To activate the script, import it from the LLDB command line: .INDENT 0.0 .INDENT 3.5 .sp .EX (lldb) command script import /path/to/mongo\-c\-driver/lldb_bson.py .EE .UNINDENT .UNINDENT .sp Upon success, the message \fBlldb_bson is ready\fP will be printed to the LLDB console. .sp The import of this script can be made automatic by adding the command to an \fB\&.lldbinit\fP file. For example: Create a file \fB~/.lldbinit\fP containing: .INDENT 0.0 .INDENT 3.5 .sp .EX command script import /path/to/mongo\-c\-driver/lldb_bson.py .EE .UNINDENT .UNINDENT .sp The docstring at the top of the \fBlldb_bson.py\fP file contains more information on the capabilities of the module. .SS Debug assertions .sp To enable runtime debug assertions, configure with \fB\-DENABLE_DEBUG_ASSERTIONS=ON\fP\&. .SS In\-Use Encryption .sp In\-Use Encryption consists of two features: .SS Client\-Side Field Level Encryption .sp New in MongoDB 4.2, Client\-Side Field Level Encryption (also referred to as CSFLE) allows administrators and developers to encrypt specific data fields in addition to other MongoDB encryption features. .sp With CSFLE, developers can encrypt fields client side without any server\-side configuration or directives. CSFLE supports workloads where applications must guarantee that unauthorized parties, including server administrators, cannot read the encrypted data. .sp Automatic encryption, where sensitive fields in commands are encrypted automatically, requires an Enterprise\-only dependency for Query Analysis. See \fI\%In\-Use Encryption\fP for more information. 
.sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf The MongoDB Manual for \fI\%Client\-Side Field Level Encryption\fP .fi .sp .UNINDENT .UNINDENT .SS Automatic Client\-Side Field Level Encryption .sp Automatic encryption is enabled by calling \fI\%mongoc_client_enable_auto_encryption()\fP on a \fI\%mongoc_client_t\fP\&. The following examples show how to set up automatic encryption using \fI\%mongoc_client_encryption_t\fP to create a new encryption data key. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Automatic encryption requires MongoDB 4.2 enterprise or a MongoDB 4.2 Atlas cluster. The community version of the server supports automatic decryption as well as \fI\%Explicit Encryption\fP\&. .UNINDENT .UNINDENT .SS Providing Local Automatic Encryption Rules .sp The following example shows how to specify automatic encryption rules using a schema map set with \fI\%mongoc_auto_encryption_opts_set_schema_map()\fP\&. The automatic encryption rules are expressed using a strict subset of the JSON Schema syntax. .sp Supplying a schema map provides more security than relying on JSON Schemas obtained from the server. It protects against a malicious server advertising a false JSON Schema, which could trick the client into sending unencrypted data that should be encrypted. .sp JSON Schemas supplied in the schema map only apply to configuring automatic encryption. Other validation rules in the JSON schema will not be enforced by the driver and will result in an error: .sp client\-side\-encryption\-schema\-map.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include \(dqclient\-side\-encryption\-helpers.h\(dq /* Helper method to create a new data key in the key vault, a schema to use that * key, and writes the schema to a file for later use. 
*/ static bool create_schema_file (bson_t *kms_providers, const char *keyvault_db, const char *keyvault_coll, mongoc_client_t *keyvault_client, bson_error_t *error) { mongoc_client_encryption_t *client_encryption = NULL; mongoc_client_encryption_opts_t *client_encryption_opts = NULL; mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL; bson_value_t datakey_id = {0}; char *keyaltnames[] = {\(dqmongoc_encryption_example_1\(dq}; bson_t *schema = NULL; char *schema_string = NULL; size_t schema_string_len; FILE *outfile = NULL; bool ret = false; client_encryption_opts = mongoc_client_encryption_opts_new (); mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts, kms_providers); mongoc_client_encryption_opts_set_keyvault_namespace ( client_encryption_opts, keyvault_db, keyvault_coll); mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts, keyvault_client); client_encryption = mongoc_client_encryption_new (client_encryption_opts, error); if (!client_encryption) { goto fail; } /* Create a new data key and json schema for the encryptedField. * https://dochub.mongodb.org/core/client\-side\-field\-level\-encryption\-automatic\-encryption\-rules */ datakey_opts = mongoc_client_encryption_datakey_opts_new (); mongoc_client_encryption_datakey_opts_set_keyaltnames ( datakey_opts, keyaltnames, 1); if (!mongoc_client_encryption_create_datakey ( client_encryption, \(dqlocal\(dq, datakey_opts, &datakey_id, error)) { goto fail; } /* Create a schema describing that \(dqencryptedField\(dq is a string encrypted * with the newly created data key using deterministic encryption. 
*/ schema = BCON_NEW (\(dqproperties\(dq, \(dq{\(dq, \(dqencryptedField\(dq, \(dq{\(dq, \(dqencrypt\(dq, \(dq{\(dq, \(dqkeyId\(dq, \(dq[\(dq, BCON_BIN (datakey_id.value.v_binary.subtype, datakey_id.value.v_binary.data, datakey_id.value.v_binary.data_len), \(dq]\(dq, \(dqbsonType\(dq, \(dqstring\(dq, \(dqalgorithm\(dq, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dqbsonType\(dq, \(dqobject\(dq); /* Use canonical JSON so that other drivers and tools will be * able to parse the MongoDB extended JSON file. */ schema_string = bson_as_canonical_extended_json (schema, &schema_string_len); outfile = fopen (\(dqjsonSchema.json\(dq, \(dqw\(dq); if (0 == fwrite (schema_string, sizeof (char), schema_string_len, outfile)) { fprintf (stderr, \(dqfailed to write to file\en\(dq); goto fail; } ret = true; fail: mongoc_client_encryption_destroy (client_encryption); mongoc_client_encryption_datakey_opts_destroy (datakey_opts); mongoc_client_encryption_opts_destroy (client_encryption_opts); bson_free (schema_string); bson_destroy (schema); bson_value_destroy (&datakey_id); if (outfile) { fclose (outfile); } return ret; } /* This example demonstrates how to use automatic encryption with a client\-side * schema map using the enterprise version of MongoDB */ int main (void) { /* The collection used to store the encryption data keys. */ #define KEYVAULT_DB \(dqencryption\(dq #define KEYVAULT_COLL \(dq__libmongocTestKeyVault\(dq /* The collection used to store the encrypted documents in this example. 
*/ #define ENCRYPTED_DB \(dqtest\(dq #define ENCRYPTED_COLL \(dqcoll\(dq int exit_status = EXIT_FAILURE; bool ret; uint8_t *local_masterkey = NULL; uint32_t local_masterkey_len; bson_t *kms_providers = NULL; bson_error_t error = {0}; bson_t *index_keys = NULL; bson_t *index_opts = NULL; mongoc_index_model_t *index_model = NULL; bson_json_reader_t *reader = NULL; bson_t schema = BSON_INITIALIZER; bson_t *schema_map = NULL; /* The MongoClient used to access the key vault (keyvault_namespace). */ mongoc_client_t *keyvault_client = NULL; mongoc_collection_t *keyvault_coll = NULL; mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; bson_t *to_insert = NULL; mongoc_client_t *unencrypted_client = NULL; mongoc_collection_t *unencrypted_coll = NULL; mongoc_init (); /* Configure the master key. This must be the same master key that was used * to create the encryption key. */ local_masterkey = hex_to_bin (getenv (\(dqLOCAL_MASTERKEY\(dq), &local_masterkey_len); if (!local_masterkey || local_masterkey_len != 96) { fprintf (stderr, \(dqSpecify LOCAL_MASTERKEY environment variable as a \(dq \(dqsecure random 96 byte hex value.\en\(dq); goto fail; } kms_providers = BCON_NEW (\(dqlocal\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_BIN (0, local_masterkey, local_masterkey_len), \(dq}\(dq); /* Set up the key vault for this example. */ keyvault_client = mongoc_client_new ( \(dqmongodb://localhost/?appname=client\-side\-encryption\-keyvault\(dq); BSON_ASSERT (keyvault_client); keyvault_coll = mongoc_client_get_collection ( keyvault_client, KEYVAULT_DB, KEYVAULT_COLL); mongoc_collection_drop (keyvault_coll, NULL); /* Create a unique index to ensure that two data keys cannot share the same * keyAltName. This is recommended practice for the key vault. 
*/ index_keys = BCON_NEW (\(dqkeyAltNames\(dq, BCON_INT32 (1)); index_opts = BCON_NEW (\(dqunique\(dq, BCON_BOOL (true), \(dqpartialFilterExpression\(dq, \(dq{\(dq, \(dqkeyAltNames\(dq, \(dq{\(dq, \(dq$exists\(dq, BCON_BOOL (true), \(dq}\(dq, \(dq}\(dq); index_model = mongoc_index_model_new (index_keys, index_opts); ret = mongoc_collection_create_indexes_with_opts (keyvault_coll, &index_model, 1, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } /* Create a new data key and a schema using it for encryption. Save the * schema to the file jsonSchema.json */ ret = create_schema_file ( kms_providers, KEYVAULT_DB, KEYVAULT_COLL, keyvault_client, &error); if (!ret) { goto fail; } /* Load the JSON Schema and construct the local schema_map option. */ reader = bson_json_reader_new_from_file (\(dqjsonSchema.json\(dq, &error); if (!reader) { goto fail; } bson_json_reader_read (reader, &schema, &error); /* Construct the schema map, mapping the namespace of the collection to the * schema describing encryption. */ schema_map = BCON_NEW (ENCRYPTED_DB \(dq.\(dq ENCRYPTED_COLL, BCON_DOCUMENT (&schema)); auto_encryption_opts = mongoc_auto_encryption_opts_new (); mongoc_auto_encryption_opts_set_keyvault_client (auto_encryption_opts, keyvault_client); mongoc_auto_encryption_opts_set_keyvault_namespace ( auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts, kms_providers); mongoc_auto_encryption_opts_set_schema_map (auto_encryption_opts, schema_map); client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); BSON_ASSERT (client); /* Enable automatic encryption. It will determine that encryption is * necessary from the schema map instead of relying on the server to provide * a schema. 
*/ ret = mongoc_client_enable_auto_encryption ( client, auto_encryption_opts, &error); if (!ret) { goto fail; } coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL); /* Clear old data */ mongoc_collection_drop (coll, NULL); to_insert = BCON_NEW (\(dqencryptedField\(dq, \(dq123456789\(dq); ret = mongoc_collection_insert_one ( coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } printf (\(dqdecrypted document: \(dq); if (!print_one_document (coll, &error)) { goto fail; } printf (\(dq\en\(dq); unencrypted_client = mongoc_client_new ( \(dqmongodb://localhost/?appname=client\-side\-encryption\-unencrypted\(dq); BSON_ASSERT (unencrypted_client); unencrypted_coll = mongoc_client_get_collection ( unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL); printf (\(dqencrypted document: \(dq); if (!print_one_document (unencrypted_coll, &error)) { goto fail; } printf (\(dq\en\(dq); exit_status = EXIT_SUCCESS; fail: if (error.code) { fprintf (stderr, \(dqerror: %s\en\(dq, error.message); } bson_free (local_masterkey); bson_destroy (kms_providers); mongoc_collection_destroy (keyvault_coll); mongoc_index_model_destroy (index_model); bson_destroy (index_opts); bson_destroy (index_keys); bson_json_reader_destroy (reader); mongoc_auto_encryption_opts_destroy (auto_encryption_opts); mongoc_collection_destroy (coll); mongoc_client_destroy (client); bson_destroy (to_insert); mongoc_collection_destroy (unencrypted_coll); mongoc_client_destroy (unencrypted_client); mongoc_client_destroy (keyvault_client); bson_destroy (&schema); bson_destroy (schema_map); mongoc_cleanup (); return exit_status; } .EE .UNINDENT .UNINDENT .SS Server\-Side Field Level Encryption Enforcement .sp The MongoDB 4.2 server supports using schema validation to enforce encryption of specific fields in a collection. This schema validation will prevent an application from inserting unencrypted values for any fields marked with the \(dqencrypt\(dq JSON schema keyword. 
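.sp For orientation, the \fBcreate\fP command built with \fBBCON_NEW\fP in the next example produces a validator of roughly the following shape. This is a sketch in MongoDB extended JSON; the \fBkeyId\fP base64 payload is elided because it is the UUID of the data key created at runtime:

```json
{
  "create": "coll",
  "validator": {
    "$jsonSchema": {
      "bsonType": "object",
      "properties": {
        "encryptedField": {
          "encrypt": {
            "keyId": [ { "$binary": { "base64": "...", "subType": "04" } } ],
            "bsonType": "string",
            "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
          }
        }
      }
    }
  }
}
```

Only the \(dqencrypt\(dq keyword is meaningful to the driver; any other validation rules would have to be enforced by the server.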
.sp The following example shows how to set up automatic encryption using \fI\%mongoc_client_encryption_t\fP to create a new encryption data key and create a collection with the necessary JSON Schema: .sp client\-side\-encryption\-server\-schema.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include \(dqclient\-side\-encryption\-helpers.h\(dq /* Helper method to create and return a JSON schema to use for encryption. The caller will use the returned schema for server\-side encryption validation. */ static bson_t * create_schema (bson_t *kms_providers, const char *keyvault_db, const char *keyvault_coll, mongoc_client_t *keyvault_client, bson_error_t *error) { mongoc_client_encryption_t *client_encryption = NULL; mongoc_client_encryption_opts_t *client_encryption_opts = NULL; mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL; bson_value_t datakey_id = {0}; char *keyaltnames[] = {\(dqmongoc_encryption_example_2\(dq}; bson_t *schema = NULL; client_encryption_opts = mongoc_client_encryption_opts_new (); mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts, kms_providers); mongoc_client_encryption_opts_set_keyvault_namespace ( client_encryption_opts, keyvault_db, keyvault_coll); mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts, keyvault_client); client_encryption = mongoc_client_encryption_new (client_encryption_opts, error); if (!client_encryption) { goto fail; } /* Create a new data key and json schema for the encryptedField. 
* https://dochub.mongodb.org/core/client\-side\-field\-level\-encryption\-automatic\-encryption\-rules */ datakey_opts = mongoc_client_encryption_datakey_opts_new (); mongoc_client_encryption_datakey_opts_set_keyaltnames ( datakey_opts, keyaltnames, 1); if (!mongoc_client_encryption_create_datakey ( client_encryption, \(dqlocal\(dq, datakey_opts, &datakey_id, error)) { goto fail; } /* Create a schema describing that \(dqencryptedField\(dq is a string encrypted * with the newly created data key using deterministic encryption. */ schema = BCON_NEW (\(dqproperties\(dq, \(dq{\(dq, \(dqencryptedField\(dq, \(dq{\(dq, \(dqencrypt\(dq, \(dq{\(dq, \(dqkeyId\(dq, \(dq[\(dq, BCON_BIN (datakey_id.value.v_binary.subtype, datakey_id.value.v_binary.data, datakey_id.value.v_binary.data_len), \(dq]\(dq, \(dqbsonType\(dq, \(dqstring\(dq, \(dqalgorithm\(dq, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dqbsonType\(dq, \(dqobject\(dq); fail: mongoc_client_encryption_destroy (client_encryption); mongoc_client_encryption_datakey_opts_destroy (datakey_opts); mongoc_client_encryption_opts_destroy (client_encryption_opts); bson_value_destroy (&datakey_id); return schema; } /* This example demonstrates how to use automatic encryption with a server\-side * schema using the enterprise version of MongoDB */ int main (void) { /* The collection used to store the encryption data keys. */ #define KEYVAULT_DB \(dqencryption\(dq #define KEYVAULT_COLL \(dq__libmongocTestKeyVault\(dq /* The collection used to store the encrypted documents in this example. 
*/ #define ENCRYPTED_DB \(dqtest\(dq #define ENCRYPTED_COLL \(dqcoll\(dq int exit_status = EXIT_FAILURE; bool ret; uint8_t *local_masterkey = NULL; uint32_t local_masterkey_len; bson_t *kms_providers = NULL; bson_error_t error = {0}; bson_t *index_keys = NULL; bson_t *index_opts = NULL; mongoc_index_model_t *index_model = NULL; bson_json_reader_t *reader = NULL; bson_t *schema = NULL; /* The MongoClient used to access the key vault (keyvault_namespace). */ mongoc_client_t *keyvault_client = NULL; mongoc_collection_t *keyvault_coll = NULL; mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; bson_t *to_insert = NULL; mongoc_client_t *unencrypted_client = NULL; mongoc_collection_t *unencrypted_coll = NULL; bson_t *create_cmd = NULL; bson_t *create_cmd_opts = NULL; mongoc_write_concern_t *wc = NULL; mongoc_init (); /* Configure the master key. This must be the same master key that was used * to create * the encryption key. */ local_masterkey = hex_to_bin (getenv (\(dqLOCAL_MASTERKEY\(dq), &local_masterkey_len); if (!local_masterkey || local_masterkey_len != 96) { fprintf (stderr, \(dqSpecify LOCAL_MASTERKEY environment variable as a \(dq \(dqsecure random 96 byte hex value.\en\(dq); goto fail; } kms_providers = BCON_NEW (\(dqlocal\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_BIN (0, local_masterkey, local_masterkey_len), \(dq}\(dq); /* Set up the key vault for this example. */ keyvault_client = mongoc_client_new ( \(dqmongodb://localhost/?appname=client\-side\-encryption\-keyvault\(dq); BSON_ASSERT (keyvault_client); keyvault_coll = mongoc_client_get_collection ( keyvault_client, KEYVAULT_DB, KEYVAULT_COLL); mongoc_collection_drop (keyvault_coll, NULL); /* Create a unique index to ensure that two data keys cannot share the same * keyAltName. This is recommended practice for the key vault. 
*/ index_keys = BCON_NEW (\(dqkeyAltNames\(dq, BCON_INT32 (1)); index_opts = BCON_NEW (\(dqunique\(dq, BCON_BOOL (true), \(dqpartialFilterExpression\(dq, \(dq{\(dq, \(dqkeyAltNames\(dq, \(dq{\(dq, \(dq$exists\(dq, BCON_BOOL (true), \(dq}\(dq, \(dq}\(dq); index_model = mongoc_index_model_new (index_keys, index_opts); ret = mongoc_collection_create_indexes_with_opts (keyvault_coll, &index_model, 1, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } auto_encryption_opts = mongoc_auto_encryption_opts_new (); mongoc_auto_encryption_opts_set_keyvault_client (auto_encryption_opts, keyvault_client); mongoc_auto_encryption_opts_set_keyvault_namespace ( auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts, kms_providers); schema = create_schema ( kms_providers, KEYVAULT_DB, KEYVAULT_COLL, keyvault_client, &error); if (!schema) { goto fail; } client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); BSON_ASSERT (client); ret = mongoc_client_enable_auto_encryption ( client, auto_encryption_opts, &error); if (!ret) { goto fail; } coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL); /* Clear old data */ mongoc_collection_drop (coll, NULL); /* Create the collection with the encryption JSON Schema. 
*/ create_cmd = BCON_NEW (\(dqcreate\(dq, ENCRYPTED_COLL, \(dqvalidator\(dq, \(dq{\(dq, \(dq$jsonSchema\(dq, BCON_DOCUMENT (schema), \(dq}\(dq); wc = mongoc_write_concern_new (); mongoc_write_concern_set_wmajority (wc, 0); create_cmd_opts = bson_new (); mongoc_write_concern_append (wc, create_cmd_opts); ret = mongoc_client_command_with_opts (client, ENCRYPTED_DB, create_cmd, NULL /* read prefs */, create_cmd_opts, NULL /* reply */, &error); if (!ret) { goto fail; } to_insert = BCON_NEW (\(dqencryptedField\(dq, \(dq123456789\(dq); ret = mongoc_collection_insert_one ( coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } printf (\(dqdecrypted document: \(dq); if (!print_one_document (coll, &error)) { goto fail; } printf (\(dq\en\(dq); unencrypted_client = mongoc_client_new ( \(dqmongodb://localhost/?appname=client\-side\-encryption\-unencrypted\(dq); BSON_ASSERT (unencrypted_client); unencrypted_coll = mongoc_client_get_collection ( unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL); printf (\(dqencrypted document: \(dq); if (!print_one_document (unencrypted_coll, &error)) { goto fail; } printf (\(dq\en\(dq); /* Expect a server\-side error if inserting with the unencrypted collection. 
*/ ret = mongoc_collection_insert_one ( unencrypted_coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { printf (\(dqinsert with unencrypted collection failed: %s\en\(dq, error.message); memset (&error, 0, sizeof (error)); } exit_status = EXIT_SUCCESS; fail: if (error.code) { fprintf (stderr, \(dqerror: %s\en\(dq, error.message); } bson_free (local_masterkey); bson_destroy (kms_providers); mongoc_collection_destroy (keyvault_coll); mongoc_index_model_destroy (index_model); bson_destroy (index_opts); bson_destroy (index_keys); bson_json_reader_destroy (reader); mongoc_auto_encryption_opts_destroy (auto_encryption_opts); mongoc_collection_destroy (coll); mongoc_client_destroy (client); bson_destroy (to_insert); mongoc_collection_destroy (unencrypted_coll); mongoc_client_destroy (unencrypted_client); mongoc_client_destroy (keyvault_client); bson_destroy (schema); bson_destroy (create_cmd); bson_destroy (create_cmd_opts); mongoc_write_concern_destroy (wc); mongoc_cleanup (); return exit_status; } .EE .UNINDENT .UNINDENT .SS Explicit Encryption .sp Explicit encryption is a MongoDB community feature and does not use \fI\%Query Analysis\fP (\fBmongocryptd\fP or \fBcrypt_shared\fP). Explicit encryption is provided by the \fI\%mongoc_client_encryption_t\fP class, for example: .sp client\-side\-encryption\-explicit.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include \(dqclient\-side\-encryption\-helpers.h\(dq /* This example demonstrates how to use explicit encryption and decryption using * the community version of MongoDB */ int main (void) { /* The collection used to store the encryption data keys. */ #define KEYVAULT_DB \(dqencryption\(dq #define KEYVAULT_COLL \(dq__libmongocTestKeyVault\(dq /* The collection used to store the encrypted documents in this example. 
*/ #define ENCRYPTED_DB \(dqtest\(dq #define ENCRYPTED_COLL \(dqcoll\(dq int exit_status = EXIT_FAILURE; bool ret; uint8_t *local_masterkey = NULL; uint32_t local_masterkey_len; bson_t *kms_providers = NULL; bson_error_t error = {0}; bson_t *index_keys = NULL; bson_t *index_opts = NULL; mongoc_index_model_t *index_model = NULL; bson_t *schema = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; mongoc_collection_t *keyvault_coll = NULL; bson_t *to_insert = NULL; bson_t *create_cmd = NULL; bson_t *create_cmd_opts = NULL; mongoc_write_concern_t *wc = NULL; mongoc_client_encryption_t *client_encryption = NULL; mongoc_client_encryption_opts_t *client_encryption_opts = NULL; mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL; char *keyaltnames[] = {\(dqmongoc_encryption_example_3\(dq}; bson_value_t datakey_id = {0}; bson_value_t encrypted_field = {0}; bson_value_t to_encrypt = {0}; mongoc_client_encryption_encrypt_opts_t *encrypt_opts = NULL; bson_value_t decrypted = {0}; mongoc_init (); /* Configure the master key. This must be the same master key that was used * to create the encryption key. */ local_masterkey = hex_to_bin (getenv (\(dqLOCAL_MASTERKEY\(dq), &local_masterkey_len); if (!local_masterkey || local_masterkey_len != 96) { fprintf (stderr, \(dqSpecify LOCAL_MASTERKEY environment variable as a \(dq \(dqsecure random 96 byte hex value.\en\(dq); goto fail; } kms_providers = BCON_NEW (\(dqlocal\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_BIN (0, local_masterkey, local_masterkey_len), \(dq}\(dq); /* The mongoc_client_t used to read/write application data. */ client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL); /* Clear old data */ mongoc_collection_drop (coll, NULL); /* Set up the key vault for this example. 
*/ keyvault_coll = mongoc_client_get_collection (client, KEYVAULT_DB, KEYVAULT_COLL); mongoc_collection_drop (keyvault_coll, NULL); /* Create a unique index to ensure that two data keys cannot share the same * keyAltName. This is recommended practice for the key vault. */ index_keys = BCON_NEW (\(dqkeyAltNames\(dq, BCON_INT32 (1)); index_opts = BCON_NEW (\(dqunique\(dq, BCON_BOOL (true), \(dqpartialFilterExpression\(dq, \(dq{\(dq, \(dqkeyAltNames\(dq, \(dq{\(dq, \(dq$exists\(dq, BCON_BOOL (true), \(dq}\(dq, \(dq}\(dq); index_model = mongoc_index_model_new (index_keys, index_opts); ret = mongoc_collection_create_indexes_with_opts (keyvault_coll, &index_model, 1, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } client_encryption_opts = mongoc_client_encryption_opts_new (); mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts, kms_providers); mongoc_client_encryption_opts_set_keyvault_namespace ( client_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); /* Set a mongoc_client_t to use for reading/writing to the key vault. This * can be the same mongoc_client_t used by the main application. */ mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts, client); client_encryption = mongoc_client_encryption_new (client_encryption_opts, &error); if (!client_encryption) { goto fail; } /* Create a new data key for the encryptedField. 
* https://dochub.mongodb.org/core/client\-side\-field\-level\-encryption\-automatic\-encryption\-rules */ datakey_opts = mongoc_client_encryption_datakey_opts_new (); mongoc_client_encryption_datakey_opts_set_keyaltnames ( datakey_opts, keyaltnames, 1); if (!mongoc_client_encryption_create_datakey ( client_encryption, \(dqlocal\(dq, datakey_opts, &datakey_id, &error)) { goto fail; } /* Explicitly encrypt a field */ encrypt_opts = mongoc_client_encryption_encrypt_opts_new (); mongoc_client_encryption_encrypt_opts_set_algorithm ( encrypt_opts, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC); mongoc_client_encryption_encrypt_opts_set_keyid (encrypt_opts, &datakey_id); to_encrypt.value_type = BSON_TYPE_UTF8; to_encrypt.value.v_utf8.str = \(dq123456789\(dq; const size_t len = strlen (to_encrypt.value.v_utf8.str); BSON_ASSERT (bson_in_range_unsigned (uint32_t, len)); to_encrypt.value.v_utf8.len = (uint32_t) len; ret = mongoc_client_encryption_encrypt ( client_encryption, &to_encrypt, encrypt_opts, &encrypted_field, &error); if (!ret) { goto fail; } to_insert = bson_new (); BSON_APPEND_VALUE (to_insert, \(dqencryptedField\(dq, &encrypted_field); ret = mongoc_collection_insert_one ( coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } printf (\(dqencrypted document: \(dq); if (!print_one_document (coll, &error)) { goto fail; } printf (\(dq\en\(dq); /* Explicitly decrypt a field */ ret = mongoc_client_encryption_decrypt ( client_encryption, &encrypted_field, &decrypted, &error); if (!ret) { goto fail; } printf (\(dqdecrypted value: %s\en\(dq, decrypted.value.v_utf8.str); exit_status = EXIT_SUCCESS; fail: if (error.code) { fprintf (stderr, \(dqerror: %s\en\(dq, error.message); } bson_free (local_masterkey); bson_destroy (kms_providers); mongoc_collection_destroy (keyvault_coll); mongoc_index_model_destroy (index_model); bson_destroy (index_opts); bson_destroy (index_keys); mongoc_collection_destroy (coll); mongoc_client_destroy (client); 
bson_destroy (to_insert); bson_destroy (schema); bson_destroy (create_cmd); bson_destroy (create_cmd_opts); mongoc_write_concern_destroy (wc); mongoc_client_encryption_destroy (client_encryption); mongoc_client_encryption_datakey_opts_destroy (datakey_opts); mongoc_client_encryption_opts_destroy (client_encryption_opts); bson_value_destroy (&encrypted_field); mongoc_client_encryption_encrypt_opts_destroy (encrypt_opts); bson_value_destroy (&decrypted); bson_value_destroy (&datakey_id); mongoc_cleanup (); return exit_status; } .EE .UNINDENT .UNINDENT .SS Explicit Encryption with Automatic Decryption .sp Although automatic encryption requires MongoDB 4.2 enterprise or a MongoDB 4.2 Atlas cluster, automatic decryption is supported for all users. To configure automatic decryption without automatic encryption set bypass_auto_encryption=True in \fI\%mongoc_auto_encryption_opts_t\fP: .sp client\-side\-encryption\-auto\-decryption.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include #include #include #include \(dqclient\-side\-encryption\-helpers.h\(dq /* This example demonstrates how to set up automatic decryption without * automatic encryption using the community version of MongoDB */ int main (void) { /* The collection used to store the encryption data keys. */ #define KEYVAULT_DB \(dqencryption\(dq #define KEYVAULT_COLL \(dq__libmongocTestKeyVault\(dq /* The collection used to store the encrypted documents in this example. 
*/ #define ENCRYPTED_DB \(dqtest\(dq #define ENCRYPTED_COLL \(dqcoll\(dq int exit_status = EXIT_FAILURE; bool ret; uint8_t *local_masterkey = NULL; uint32_t local_masterkey_len; bson_t *kms_providers = NULL; bson_error_t error = {0}; bson_t *index_keys = NULL; bson_t *index_opts = NULL; mongoc_index_model_t *index_model = NULL; bson_t *schema = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; mongoc_collection_t *keyvault_coll = NULL; bson_t *to_insert = NULL; bson_t *create_cmd = NULL; bson_t *create_cmd_opts = NULL; mongoc_write_concern_t *wc = NULL; mongoc_client_encryption_t *client_encryption = NULL; mongoc_client_encryption_opts_t *client_encryption_opts = NULL; mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL; char *keyaltnames[] = {\(dqmongoc_encryption_example_4\(dq}; bson_value_t datakey_id = {0}; bson_value_t encrypted_field = {0}; bson_value_t to_encrypt = {0}; mongoc_client_encryption_encrypt_opts_t *encrypt_opts = NULL; bson_value_t decrypted = {0}; mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL; mongoc_client_t *unencrypted_client = NULL; mongoc_collection_t *unencrypted_coll = NULL; mongoc_init (); /* Configure the master key. This must be the same master key that was used * to create the encryption key. 
*/ local_masterkey = hex_to_bin (getenv (\(dqLOCAL_MASTERKEY\(dq), &local_masterkey_len); if (!local_masterkey || local_masterkey_len != 96) { fprintf (stderr, \(dqSpecify LOCAL_MASTERKEY environment variable as a \(dq \(dqsecure random 96 byte hex value.\en\(dq); goto fail; } kms_providers = BCON_NEW (\(dqlocal\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_BIN (0, local_masterkey, local_masterkey_len), \(dq}\(dq); client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); auto_encryption_opts = mongoc_auto_encryption_opts_new (); mongoc_auto_encryption_opts_set_keyvault_namespace ( auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts, kms_providers); /* Setting bypass_auto_encryption to true disables automatic encryption but * keeps the automatic decryption behavior. bypass_auto_encryption will also * disable spawning mongocryptd */ mongoc_auto_encryption_opts_set_bypass_auto_encryption (auto_encryption_opts, true); /* Once bypass_auto_encryption is set, community users can enable auto * encryption on the client. This will, in fact, only perform automatic * decryption. */ ret = mongoc_client_enable_auto_encryption ( client, auto_encryption_opts, &error); if (!ret) { goto fail; } /* Now that automatic decryption is on, we can test it by inserting a * document with an explicitly encrypted value into the collection. When we * look up the document later, it should be automatically decrypted for us. */ coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL); /* Clear old data */ mongoc_collection_drop (coll, NULL); /* Set up the key vault for this example. */ keyvault_coll = mongoc_client_get_collection (client, KEYVAULT_DB, KEYVAULT_COLL); mongoc_collection_drop (keyvault_coll, NULL); /* Create a unique index to ensure that two data keys cannot share the same * keyAltName. This is recommended practice for the key vault. 
*/ index_keys = BCON_NEW (\(dqkeyAltNames\(dq, BCON_INT32 (1)); index_opts = BCON_NEW (\(dqunique\(dq, BCON_BOOL (true), \(dqpartialFilterExpression\(dq, \(dq{\(dq, \(dqkeyAltNames\(dq, \(dq{\(dq, \(dq$exists\(dq, BCON_BOOL (true), \(dq}\(dq, \(dq}\(dq); index_model = mongoc_index_model_new (index_keys, index_opts); ret = mongoc_collection_create_indexes_with_opts (keyvault_coll, &index_model, 1, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } client_encryption_opts = mongoc_client_encryption_opts_new (); mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts, kms_providers); mongoc_client_encryption_opts_set_keyvault_namespace ( client_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); /* The key vault client is used for reading to/from the key vault. This can * be the same mongoc_client_t used by the application. */ mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts, client); client_encryption = mongoc_client_encryption_new (client_encryption_opts, &error); if (!client_encryption) { goto fail; } /* Create a new data key for the encryptedField. * https://dochub.mongodb.org/core/client\-side\-field\-level\-encryption\-automatic\-encryption\-rules */ datakey_opts = mongoc_client_encryption_datakey_opts_new (); mongoc_client_encryption_datakey_opts_set_keyaltnames ( datakey_opts, keyaltnames, 1); ret = mongoc_client_encryption_create_datakey ( client_encryption, \(dqlocal\(dq, datakey_opts, &datakey_id, &error); if (!ret) { goto fail; } /* Explicitly encrypt a field. 
*/ encrypt_opts = mongoc_client_encryption_encrypt_opts_new (); mongoc_client_encryption_encrypt_opts_set_algorithm ( encrypt_opts, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC); mongoc_client_encryption_encrypt_opts_set_keyaltname ( encrypt_opts, \(dqmongoc_encryption_example_4\(dq); to_encrypt.value_type = BSON_TYPE_UTF8; to_encrypt.value.v_utf8.str = \(dq123456789\(dq; const size_t len = strlen (to_encrypt.value.v_utf8.str); BSON_ASSERT (bson_in_range_unsigned (uint32_t, len)); to_encrypt.value.v_utf8.len = (uint32_t) len; ret = mongoc_client_encryption_encrypt ( client_encryption, &to_encrypt, encrypt_opts, &encrypted_field, &error); if (!ret) { goto fail; } to_insert = bson_new (); BSON_APPEND_VALUE (to_insert, \(dqencryptedField\(dq, &encrypted_field); ret = mongoc_collection_insert_one ( coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } /* When we retrieve the document, any encrypted fields will get automatically * decrypted by the driver. 
*/ printf (\(dqdecrypted document: \(dq); if (!print_one_document (coll, &error)) { goto fail; } printf (\(dq\en\(dq); unencrypted_client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); unencrypted_coll = mongoc_client_get_collection ( unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL); printf (\(dqencrypted document: \(dq); if (!print_one_document (unencrypted_coll, &error)) { goto fail; } printf (\(dq\en\(dq); exit_status = EXIT_SUCCESS; fail: if (error.code) { fprintf (stderr, \(dqerror: %s\en\(dq, error.message); } bson_free (local_masterkey); bson_destroy (kms_providers); mongoc_collection_destroy (keyvault_coll); mongoc_index_model_destroy (index_model); bson_destroy (index_opts); bson_destroy (index_keys); mongoc_collection_destroy (coll); mongoc_client_destroy (client); bson_destroy (to_insert); bson_destroy (schema); bson_destroy (create_cmd); bson_destroy (create_cmd_opts); mongoc_write_concern_destroy (wc); mongoc_client_encryption_destroy (client_encryption); mongoc_client_encryption_datakey_opts_destroy (datakey_opts); mongoc_client_encryption_opts_destroy (client_encryption_opts); bson_value_destroy (&encrypted_field); mongoc_client_encryption_encrypt_opts_destroy (encrypt_opts); bson_value_destroy (&decrypted); bson_value_destroy (&datakey_id); mongoc_collection_destroy (unencrypted_coll); mongoc_client_destroy (unencrypted_client); mongoc_auto_encryption_opts_destroy (auto_encryption_opts); mongoc_cleanup (); return exit_status; } .EE .UNINDENT .UNINDENT .SS Queryable Encryption .sp Using Queryable Encryption requires MongoDB Server 7.0 or higher. .sp See the MongoDB Manual for \fI\%Queryable Encryption\fP for more information about the feature. .sp API related to the \(dqrangePreview\(dq algorithm is still experimental and subject to breaking changes! .SS Queryable Encryption in older MongoDB Server versions .sp MongoDB Server 6.0 introduced Queryable Encryption as a Public Technical Preview. 
MongoDB Server 7.0 includes backwards\-breaking changes to the Queryable Encryption protocol. .sp The backwards\-breaking changes are applied in the client protocol in libmongocrypt 1.8.0. libmongoc 1.24.0 requires libmongocrypt 1.8.0 or newer. libmongoc 1.24.0 no longer supports Queryable Encryption in MongoDB Server <7.0. Using Queryable Encryption with libmongoc 1.24.0 and higher requires MongoDB Server >=7.0. .sp Using Queryable Encryption with libmongocrypt<1.8.0 on a MongoDB Server>=7.0, or using libmongocrypt>=1.8.0 on a MongoDB Server<6.0, will result in a server error when using the incompatible protocol. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf The MongoDB Manual for \fI\%Queryable Encryption\fP .fi .sp .UNINDENT .UNINDENT .SS Installation .sp Using In\-Use Encryption in the C driver requires the dependency libmongocrypt. See the MongoDB Manual for \fI\%libmongocrypt installation instructions\fP\&. .sp Once libmongocrypt is installed, configure the C driver with \fB\-DENABLE_CLIENT_SIDE_ENCRYPTION=ON\fP to require that In\-Use Encryption support be enabled. .INDENT 0.0 .INDENT 3.5 .sp .EX $ cd mongo\-c\-driver $ mkdir cmake\-build && cd cmake\-build $ cmake \-DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF \-DENABLE_CLIENT_SIDE_ENCRYPTION=ON .. $ cmake \-\-build . \-\-target install .EE .UNINDENT .UNINDENT .SS API .sp \fI\%mongoc_client_encryption_t\fP is used for explicit encryption and key management. \fI\%mongoc_client_enable_auto_encryption()\fP and \fI\%mongoc_client_pool_enable_auto_encryption()\fP are used to enable automatic encryption. .sp The Queryable Encryption and CSFLE features share much of the same API, with some exceptions: .INDENT 0.0 .IP \(bu 2 The supported algorithms documented in \fI\%mongoc_client_encryption_encrypt_opts_set_algorithm()\fP do not apply to both features. .IP \(bu 2 \fI\%mongoc_auto_encryption_opts_set_encrypted_fields_map()\fP only applies to Queryable Encryption.
.IP \(bu 2 \fI\%mongoc_auto_encryption_opts_set_schema_map()\fP only applies to CSFLE. .UNINDENT .SS Query Analysis .sp To support the automatic encryption feature, one of the following dependencies is required: .INDENT 0.0 .IP \(bu 2 The \fBmongocryptd\fP executable. See the MongoDB Manual documentation: \fI\%Install and Configure mongocryptd\fP .IP \(bu 2 The \fBcrypt_shared\fP library. See the MongoDB Manual documentation: \fI\%Automatic Encryption Shared Library\fP .UNINDENT .sp A \fI\%mongoc_client_t\fP or \fI\%mongoc_client_pool_t\fP configured with auto encryption will automatically try to load the \fBcrypt_shared\fP library. If loading the \fBcrypt_shared\fP library fails, the \fI\%mongoc_client_t\fP or \fI\%mongoc_client_pool_t\fP will try to spawn the \fBmongocryptd\fP process from the application\(aqs \fBPATH\fP\&. To configure use of \fBcrypt_shared\fP and \fBmongocryptd\fP, see \fI\%mongoc_auto_encryption_opts_set_extra()\fP\&. .SS API Reference .SS Initialization and cleanup .SS Synopsis .sp Initialize the MongoDB C Driver by calling \fI\%mongoc_init()\fP exactly once at the beginning of your program. It is responsible for initializing global state such as process counters, SSL, and threading primitives. .sp The one exception is \fBmongoc_log_set_handler()\fP, which should be called before \fBmongoc_init()\fP; otherwise, some log traces will not use your log handling function. See \fI\%Custom Log Handlers\fP for a detailed example. .sp Call \fI\%mongoc_cleanup()\fP exactly once at the end of your program to release all memory and other resources allocated by the driver. You must not call any other MongoDB C Driver functions after \fI\%mongoc_cleanup()\fP\&. Note that \fI\%mongoc_init()\fP does \fBnot\fP reinitialize the driver after \fI\%mongoc_cleanup()\fP\&.
.SS Deprecated feature: automatic initialization and cleanup .sp On some platforms the driver can automatically call \fI\%mongoc_init()\fP before \fBmain\fP, and call \fI\%mongoc_cleanup()\fP as the process exits. This is problematic in situations where related libraries also execute cleanup code on shutdown, and it creates inconsistent rules across platforms. Therefore the automatic initialization and cleanup feature is deprecated, and will be dropped in version 2.0. Meanwhile, for backward compatibility, the feature is \fIenabled\fP by default on platforms where it is available. .sp For portable, future\-proof code, always call \fI\%mongoc_init()\fP and \fI\%mongoc_cleanup()\fP yourself, and configure the driver like: .INDENT 0.0 .INDENT 3.5 .sp .EX cmake \-DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF .EE .UNINDENT .UNINDENT .SS Logging .sp MongoDB C driver Logging Abstraction .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_LOG_LEVEL_ERROR, MONGOC_LOG_LEVEL_CRITICAL, MONGOC_LOG_LEVEL_WARNING, MONGOC_LOG_LEVEL_MESSAGE, MONGOC_LOG_LEVEL_INFO, MONGOC_LOG_LEVEL_DEBUG, MONGOC_LOG_LEVEL_TRACE, } mongoc_log_level_t; #define MONGOC_ERROR(...) #define MONGOC_CRITICAL(...) #define MONGOC_WARNING(...) #define MONGOC_MESSAGE(...) #define MONGOC_INFO(...) #define MONGOC_DEBUG(...) typedef void (*mongoc_log_func_t) (mongoc_log_level_t log_level, const char *log_domain, const char *message, void *user_data); void mongoc_log_set_handler (mongoc_log_func_t log_func, void *user_data); void mongoc_log (mongoc_log_level_t log_level, const char *log_domain, const char *format, ...) 
BSON_GNUC_PRINTF (3, 4); const char * mongoc_log_level_str (mongoc_log_level_t log_level); void mongoc_log_default_handler (mongoc_log_level_t log_level, const char *log_domain, const char *message, void *user_data); void mongoc_log_trace_enable (void); void mongoc_log_trace_disable (void); .EE .UNINDENT .UNINDENT .sp The MongoDB C driver comes with an abstraction for logging that you can use in your application, or integrate with an existing logging system. .SS Macros .sp To make logging a little less painful, various helper macros are provided. See the following example. .INDENT 0.0 .INDENT 3.5 .sp .EX #undef MONGOC_LOG_DOMAIN #define MONGOC_LOG_DOMAIN \(dqmy\-custom\-domain\(dq MONGOC_WARNING (\(dqAn error occurred: %s\(dq, strerror (errno)); .EE .UNINDENT .UNINDENT .SS Custom Log Handlers .sp The default log handler prints a timestamp and the log message to \fBstdout\fP, or to \fBstderr\fP for warnings, critical messages, and errors. You can override the handler with \fBmongoc_log_set_handler()\fP\&. Your handler function is called with a mutex held, for thread safety. .sp For example, you could register a custom handler to suppress messages at INFO level and below: .INDENT 0.0 .INDENT 3.5 .sp .EX void my_logger (mongoc_log_level_t log_level, const char *log_domain, const char *message, void *user_data) { /* smaller values are more important */ if (log_level < MONGOC_LOG_LEVEL_INFO) { mongoc_log_default_handler (log_level, log_domain, message, user_data); } } int main (int argc, char *argv[]) { mongoc_log_set_handler (my_logger, NULL); mongoc_init (); /* ... your code ... */ mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp Note that in the example above, \fBmongoc_log_set_handler()\fP is called before \fBmongoc_init()\fP\&. Otherwise, some log traces would not be processed by the log handler.
.sp To restore the default handler: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_log_set_handler (mongoc_log_default_handler, NULL); .EE .UNINDENT .UNINDENT .SS Disable logging .sp To disable all logging, including warnings, critical messages, and errors, provide an empty log handler: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_log_set_handler (NULL, NULL); .EE .UNINDENT .UNINDENT .SS Tracing .sp If compiling your own copy of the MongoDB C driver, consider configuring with \fB\-DENABLE_TRACING=ON\fP to enable function tracing and hex dumps of network packets to \fBSTDERR\fP and \fBSTDOUT\fP during development and debugging. .sp This is especially useful when debugging what may be going on internally in the driver. .sp Trace messages can be enabled and disabled by calling \fBmongoc_log_trace_enable()\fP and \fBmongoc_log_trace_disable()\fP\&. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Compiling the driver with \fB\-DENABLE_TRACING=ON\fP will affect its performance. Disabling tracing with \fBmongoc_log_trace_disable()\fP significantly reduces the overhead, but cannot remove it completely. .UNINDENT .UNINDENT .SS Error Reporting .SS Description .sp Many C Driver functions report errors by returning \fBfalse\fP or \-1 and filling out a \fI\%bson_error_t\fP structure with an error domain, error code, and message. Use \fBdomain\fP to determine which subsystem generated the error, and \fBcode\fP for the specific error. \fBmessage\fP is a human\-readable error description. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%Handling Errors in libbson\fP\&. .fi .sp .UNINDENT .UNINDENT .TS center; |l|l|l|. _ T{ Domain T} T{ Code T} T{ Description T} _ T{ \fBMONGOC_ERROR_CLIENT\fP T} T{ \fBMONGOC_ERROR_CLIENT_TOO_BIG\fP T} T{ You tried to send a message larger than the server\(aqs max message size. T} _ T{ T} T{ \fBMONGOC_ERROR_CLIENT_AUTHENTICATE\fP T} T{ Wrong credentials, or failure sending or receiving authentication messages.
T} _ T{ T} T{ \fBMONGOC_ERROR_CLIENT_NO_ACCEPTABLE_PEER\fP T} T{ You tried a TLS connection but the driver was not built with TLS. T} _ T{ T} T{ \fBMONGOC_ERROR_CLIENT_IN_EXHAUST\fP T} T{ You began iterating an exhaust cursor, then tried to begin another operation with the same \fI\%mongoc_client_t\fP\&. T} _ T{ T} T{ \fBMONGOC_ERROR_CLIENT_SESSION_FAILURE\fP T} T{ Failure related to creating or using a logical session. T} _ T{ T} T{ \fBMONGOC_ERROR_CLIENT_INVALID_ENCRYPTION_ARG\fP T} T{ Failure related to arguments passed when initializing In\-Use Encryption. T} _ T{ T} T{ \fBMONGOC_ERROR_CLIENT_INVALID_ENCRYPTION_STATE\fP T} T{ Failure related to In\-Use Encryption. T} _ T{ T} T{ \fBMONGOC_ERROR_CLIENT_INVALID_LOAD_BALANCER\fP T} T{ You attempted to connect to a MongoDB server behind a load balancer, but the server does not advertise load balanced support. T} _ T{ \fBMONGOC_ERROR_STREAM\fP T} T{ \fBMONGOC_ERROR_STREAM_NAME_RESOLUTION\fP T} T{ DNS failure. T} _ T{ T} T{ \fBMONGOC_ERROR_STREAM_SOCKET\fP T} T{ Timeout communicating with server, or connection closed. T} _ T{ T} T{ \fBMONGOC_ERROR_STREAM_CONNECT\fP T} T{ Failed to connect to server. T} _ T{ \fBMONGOC_ERROR_PROTOCOL\fP T} T{ \fBMONGOC_ERROR_PROTOCOL_INVALID_REPLY\fP T} T{ Corrupt response from server. T} _ T{ T} T{ \fBMONGOC_ERROR_PROTOCOL_BAD_WIRE_VERSION\fP T} T{ The server version is too old or too new to communicate with the driver. T} _ T{ \fBMONGOC_ERROR_CURSOR\fP T} T{ \fBMONGOC_ERROR_CURSOR_INVALID_CURSOR\fP T} T{ You passed bad arguments to \fI\%mongoc_collection_find_with_opts()\fP, or you called \fI\%mongoc_cursor_next()\fP on a completed or failed cursor, or the cursor timed out on the server.
T} _ T{ T} T{ \fBMONGOC_ERROR_CHANGE_STREAM_NO_RESUME_TOKEN\fP T} T{ A resume token was not returned in a document found with \fI\%mongoc_change_stream_next()\fP\&. T} _ T{ \fBMONGOC_ERROR_QUERY\fP T} T{ \fBMONGOC_ERROR_QUERY_FAILURE\fP T} T{ \fI\%Error API Version 1\fP: Server error from command or query. The server error message is in \fBmessage\fP\&. T} _ T{ \fBMONGOC_ERROR_SERVER\fP T} T{ \fBMONGOC_ERROR_QUERY_FAILURE\fP T} T{ \fI\%Error API Version 2\fP: Server error from command or query. The server error message is in \fBmessage\fP\&. T} _ T{ \fBMONGOC_ERROR_SASL\fP T} T{ A SASL error code. T} T{ \fBman sasl_errors\fP for a list of codes. T} _ T{ \fBMONGOC_ERROR_BSON\fP T} T{ \fBMONGOC_ERROR_BSON_INVALID\fP T} T{ You passed an invalid or oversized BSON document as a parameter, or called \fI\%mongoc_collection_create_index()\fP with invalid keys, or the server reply was corrupt. T} _ T{ \fBMONGOC_ERROR_NAMESPACE\fP T} T{ \fBMONGOC_ERROR_NAMESPACE_INVALID\fP T} T{ You tried to create a collection with an invalid name. T} _ T{ \fBMONGOC_ERROR_COMMAND\fP T} T{ \fBMONGOC_ERROR_COMMAND_INVALID_ARG\fP T} T{ Many functions set this error code when passed bad parameters. Print the error message for details. T} _ T{ T} T{ \fBMONGOC_ERROR_PROTOCOL_BAD_WIRE_VERSION\fP T} T{ You tried to use a command option the server does not support. T} _ T{ T} T{ \fBMONGOC_ERROR_DUPLICATE_KEY\fP T} T{ An insert or update failed because of a duplicate \fB_id\fP or other unique\-index violation. T} _ T{ T} T{ \fBMONGOC_ERROR_MAX_TIME_MS_EXPIRED\fP T} T{ The operation failed because maxTimeMS expired. T} _ T{ T} T{ \fBMONGOC_ERROR_SERVER_SELECTION_INVALID_ID\fP T} T{ The \fBserverId\fP option for an operation conflicts with the pinned server for that operation\(aqs client session (denoted by the \fBsessionId\fP option). T} _ T{ \fBMONGOC_ERROR_COMMAND\fP T} T{ \fI\%Error code from server\fP\&. T} T{ \fI\%Error API Version 1\fP: Server error from a command.
The server error message is in \fBmessage\fP\&. T} _ T{ \fBMONGOC_ERROR_SERVER\fP T} T{ \fI\%Error code from server\fP\&. T} T{ \fI\%Error API Version 2\fP: Server error from a command. The server error message is in \fBmessage\fP\&. T} _ T{ \fBMONGOC_ERROR_COLLECTION\fP T} T{ \fBMONGOC_ERROR_COLLECTION_INSERT_FAILED\fP, \fBMONGOC_ERROR_COLLECTION_UPDATE_FAILED\fP, \fBMONGOC_ERROR_COLLECTION_DELETE_FAILED\fP\&. T} T{ Invalid or empty input to \fI\%mongoc_collection_insert_one()\fP, \fI\%mongoc_collection_insert_bulk()\fP, \fI\%mongoc_collection_update_one()\fP, \fI\%mongoc_collection_update_many()\fP, \fI\%mongoc_collection_replace_one()\fP, \fI\%mongoc_collection_delete_one()\fP, or \fI\%mongoc_collection_delete_many()\fP\&. T} _ T{ \fBMONGOC_ERROR_COLLECTION\fP T} T{ \fI\%Error code from server\fP\&. T} T{ \fI\%Error API Version 1\fP: Server error from \fI\%mongoc_collection_insert_one()\fP, \fI\%mongoc_collection_insert_bulk()\fP, \fI\%mongoc_collection_update_one()\fP, \fI\%mongoc_collection_update_many()\fP, \fI\%mongoc_collection_replace_one()\fP, T} _ T{ \fBMONGOC_ERROR_SERVER\fP T} T{ \fI\%Error code from server\fP\&. T} T{ \fI\%Error API Version 2\fP: Server error from \fI\%mongoc_collection_insert_one()\fP, \fI\%mongoc_collection_insert_bulk()\fP, \fI\%mongoc_collection_update_one()\fP, \fI\%mongoc_collection_update_many()\fP, \fI\%mongoc_collection_replace_one()\fP, T} _ T{ \fBMONGOC_ERROR_GRIDFS\fP T} T{ \fBMONGOC_ERROR_GRIDFS_CHUNK_MISSING\fP T} T{ The GridFS file is missing a document in its \fBchunks\fP collection. T} _ T{ T} T{ \fBMONGOC_ERROR_GRIDFS_CORRUPT\fP T} T{ A data inconsistency was detected in GridFS. T} _ T{ T} T{ \fBMONGOC_ERROR_GRIDFS_INVALID_FILENAME\fP T} T{ You passed a NULL filename to \fI\%mongoc_gridfs_remove_by_filename()\fP\&. T} _ T{ T} T{ \fBMONGOC_ERROR_GRIDFS_PROTOCOL_ERROR\fP T} T{ You called \fI\%mongoc_gridfs_file_set_id()\fP after \fI\%mongoc_gridfs_file_save()\fP, or tried to write on a closed GridFS stream. 
T} _ T{ T} T{ \fBMONGOC_ERROR_GRIDFS_BUCKET_FILE_NOT_FOUND\fP T} T{ A GridFS file is missing from \fBfiles\fP collection. T} _ T{ T} T{ \fBMONGOC_ERROR_GRIDFS_BUCKET_STREAM\fP T} T{ An error occurred on a stream created from a GridFS operation like \fI\%mongoc_gridfs_bucket_upload_from_stream()\fP\&. T} _ T{ \fBMONGOC_ERROR_SCRAM\fP T} T{ \fBMONGOC_ERROR_SCRAM_PROTOCOL_ERROR\fP T} T{ Failure in SCRAM\-SHA\-1 or SCRAM\-SHA\-256 authentication. T} _ T{ \fBMONGOC_ERROR_SERVER_SELECTION\fP T} T{ \fBMONGOC_ERROR_SERVER_SELECTION_FAILURE\fP T} T{ No replica set member or mongos is available, or none matches your \fI\%read preference\fP, or you supplied an invalid \fI\%mongoc_read_prefs_t\fP\&. T} _ T{ \fBMONGOC_ERROR_WRITE_CONCERN\fP T} T{ \fI\%Error code from server\fP\&. T} T{ There was a \fI\%write concern\fP error or \fI\%timeout\fP from the server. T} _ T{ \fBMONGOC_ERROR_TRANSACTION\fP T} T{ \fBMONGOC_ERROR_TRANSACTION_INVALID\fP T} T{ You attempted to start a transaction when one is already in progress, or commit or abort when there is no transaction. T} _ T{ \fBMONGOC_ERROR_CLIENT_SIDE_ENCRYPTION\fP T} T{ Error code produced by libmongocrypt. T} T{ An error occurred in the library responsible for In\-Use Encryption T} _ T{ \fBMONGOC_ERROR_AZURE\fP T} T{ \fBMONGOC_ERROR_KMS_SERVER_HTTP\fP T} T{ An Azure HTTP service responded with an error status T} _ T{ T} T{ \fBMONGOC_ERROR_KMS_SERVER_BAD_JSON\fP T} T{ An Azure service responded with invalid JSON data T} _ T{ \fBMONGOC_ERROR_GCP\fP T} T{ \fBMONGOC_ERROR_KMS_SERVER_HTTP\fP T} T{ A GCP HTTP service responded with an error status T} _ T{ T} T{ \fBMONGOC_ERROR_KMS_SERVER_BAD_JSON\fP T} T{ A GCP service responded with invalid JSON data T} _ .TE .SS Error Labels .sp In some cases your application must make decisions based on what category of error the driver has returned, but these categories do not correspond perfectly to an error domain or code. 
In such cases, error \fIlabels\fP provide a reliable way to determine how your application should respond to an error. .sp Any C Driver function that has a \fI\%bson_t\fP out\-parameter named \fBreply\fP may include error labels in the reply, in the form of a BSON field named \(dqerrorLabels\(dq containing an array of strings: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqerrorLabels\(dq: [ \(dqTransientTransactionError\(dq ] } .EE .UNINDENT .UNINDENT .sp Use \fI\%mongoc_error_has_label()\fP to test if a reply contains a specific label. See \fI\%mongoc_client_session_start_transaction()\fP for example code that demonstrates the use of error labels in application logic. .sp The following error labels are currently defined. Future versions of MongoDB may introduce new labels. .SS TransientTransactionError .sp Within a multi\-document transaction, certain errors can leave the transaction in an unknown or aborted state. These include write conflicts, primary stepdowns, and network errors. In response, the application should abort the transaction and try the same sequence of operations again in a new transaction. .SS UnknownTransactionCommitResult .sp When \fI\%mongoc_client_session_commit_transaction()\fP encounters a network error or certain server errors, it is not known whether the transaction was committed. Applications should attempt to commit the transaction again until: the commit succeeds, the commit fails with an error \fInot\fP labeled \(dqUnknownTransactionCommitResult\(dq, or the application chooses to give up. .SS Setting the Error API Version .sp The driver\(aqs error reporting began with a design flaw: when the error \fIdomain\fP is \fBMONGOC_ERROR_COLLECTION\fP, \fBMONGOC_ERROR_QUERY\fP, or \fBMONGOC_ERROR_COMMAND\fP, the error \fIcode\fP might originate from the server or the driver. An application cannot always know where an error originated, and therefore cannot tell what the code means.
.sp For example, if \fI\%mongoc_collection_update_one()\fP sets the error\(aqs domain to \fBMONGOC_ERROR_COLLECTION\fP and its code to 24, the application cannot know whether 24 is the generic driver error code \fBMONGOC_ERROR_COLLECTION_UPDATE_FAILED\fP or the specific server error code \(dqLockTimeout\(dq. .sp To fix this flaw while preserving backward compatibility, the C Driver 1.4 introduces \(dqError API Versions\(dq. Version 1, the default Error API Version, maintains the flawed behavior. Version 2 adds a new error domain, \fBMONGOC_ERROR_SERVER\fP\&. In Version 2, error codes originating on the server always have error domain \fBMONGOC_ERROR_SERVER\fP or \fBMONGOC_ERROR_WRITE_CONCERN\fP\&. When the driver uses Version 2 the application can always determine the origin and meaning of error codes. New applications should use Version 2, and existing applications should be updated to use Version 2 as well. .TS center; |l|l|l|. _ T{ Error Source T} T{ API Version 1 T} T{ API Version 2 T} _ T{ \fI\%mongoc_cursor_error()\fP T} T{ \fBMONGOC_ERROR_QUERY\fP T} T{ \fBMONGOC_ERROR_SERVER\fP T} _ T{ \fI\%mongoc_client_command_with_opts()\fP, \fI\%mongoc_database_command_with_opts()\fP, and other command functions T} T{ \fBMONGOC_ERROR_QUERY\fP T} T{ \fBMONGOC_ERROR_SERVER\fP T} _ T{ \fI\%mongoc_collection_count_with_opts()\fP \fI\%mongoc_client_get_database_names_with_opts()\fP, and other command helper functions T} T{ \fBMONGOC_ERROR_QUERY\fP T} T{ \fBMONGOC_ERROR_SERVER\fP T} _ T{ \fI\%mongoc_collection_insert_one()\fP \fI\%mongoc_collection_insert_bulk()\fP \fI\%mongoc_collection_update_one()\fP \fI\%mongoc_collection_update_many()\fP \fI\%mongoc_collection_replace_one()\fP \fI\%mongoc_collection_delete_one()\fP \fI\%mongoc_collection_delete_many()\fP T} T{ \fBMONGOC_ERROR_COMMAND\fP T} T{ \fBMONGOC_ERROR_SERVER\fP T} _ T{ \fI\%mongoc_bulk_operation_execute()\fP T} T{ \fBMONGOC_ERROR_COMMAND\fP T} T{ \fBMONGOC_ERROR_SERVER\fP T} _ T{ Write\-concern timeout T} T{ 
\fBMONGOC_ERROR_WRITE_CONCERN\fP T} T{ \fBMONGOC_ERROR_WRITE_CONCERN\fP T} _ .TE .sp The Error API Versions are defined with \fBMONGOC_ERROR_API_VERSION_LEGACY\fP and \fBMONGOC_ERROR_API_VERSION_2\fP\&. Set the version with \fI\%mongoc_client_set_error_api()\fP or \fI\%mongoc_client_pool_set_error_api()\fP\&. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%MongoDB Server Error Codes\fP .fi .sp .UNINDENT .UNINDENT .SS Object Lifecycle .sp This page documents the order of creation and destruction for libmongoc\(aqs main struct types. .SS Clients and pools .sp Call \fI\%mongoc_init()\fP once, before calling any other libmongoc functions, and call \fI\%mongoc_cleanup()\fP once before your program exits. .sp A program that uses libmongoc from multiple threads should create a \fI\%mongoc_client_pool_t\fP with \fI\%mongoc_client_pool_new()\fP\&. Each thread acquires a \fI\%mongoc_client_t\fP from the pool with \fI\%mongoc_client_pool_pop()\fP and returns it with \fI\%mongoc_client_pool_push()\fP when the thread is finished using it. To destroy the pool, first return all clients, then call \fI\%mongoc_client_pool_destroy()\fP\&. .sp If your program uses libmongoc from only one thread, create a \fI\%mongoc_client_t\fP directly with \fI\%mongoc_client_new()\fP or \fI\%mongoc_client_new_from_uri()\fP\&. Destroy it with \fI\%mongoc_client_destroy()\fP\&. .SS Databases, collections, and related objects .sp You can create a \fI\%mongoc_database_t\fP or \fI\%mongoc_collection_t\fP from a \fI\%mongoc_client_t\fP, and create a \fI\%mongoc_cursor_t\fP or \fI\%mongoc_bulk_operation_t\fP from a \fI\%mongoc_collection_t\fP\&. .sp Each of these objects must be destroyed before the client they were created from, but their lifetimes are otherwise independent. 
.SS GridFS objects .sp You can create a \fI\%mongoc_gridfs_t\fP from a \fI\%mongoc_client_t\fP, create a \fI\%mongoc_gridfs_file_t\fP or \fI\%mongoc_gridfs_file_list_t\fP from a \fI\%mongoc_gridfs_t\fP, create a \fI\%mongoc_gridfs_file_t\fP from a \fI\%mongoc_gridfs_file_list_t\fP, and create a \fI\%mongoc_stream_t\fP from a \fI\%mongoc_gridfs_file_t\fP\&. .sp Each of these objects depends on the object it was created from. Always destroy GridFS objects in the reverse of the order they were created. The sole exception is that a \fI\%mongoc_gridfs_file_t\fP need not be destroyed before the \fI\%mongoc_gridfs_file_list_t\fP it was created from. .SS GridFS bucket objects .sp Create \fI\%mongoc_gridfs_bucket_t\fP with a \fI\%mongoc_database_t\fP derived from a \fI\%mongoc_client_t\fP\&. The \fI\%mongoc_database_t\fP is independent from the \fI\%mongoc_gridfs_bucket_t\fP\&. But the \fI\%mongoc_client_t\fP must outlive the \fI\%mongoc_gridfs_bucket_t\fP\&. .sp A \fI\%mongoc_stream_t\fP may be created from the \fI\%mongoc_gridfs_bucket_t\fP\&. The \fI\%mongoc_gridfs_bucket_t\fP must outlive the \fI\%mongoc_stream_t\fP\&. .SS Sessions .sp Start a session with \fI\%mongoc_client_start_session()\fP, use the session for a sequence of operations and multi\-document transactions, then free it with \fI\%mongoc_client_session_destroy()\fP\&. Any \fI\%mongoc_cursor_t\fP or \fI\%mongoc_change_stream_t\fP using a session must be destroyed before the session, and a session must be destroyed before the \fI\%mongoc_client_t\fP it came from. .sp By default, sessions are \fI\%causally consistent\fP\&. To disable causal consistency, before starting a session create a \fI\%mongoc_session_opt_t\fP with \fI\%mongoc_session_opts_new()\fP and call \fI\%mongoc_session_opts_set_causal_consistency()\fP, then free the struct with \fI\%mongoc_session_opts_destroy()\fP\&. .sp Unacknowledged writes are prohibited with sessions. 
.sp A \fI\%mongoc_client_session_t\fP must be used by only one thread at a time. Due to session pooling, \fI\%mongoc_client_start_session()\fP may return a session that has been idle for some time and is about to be closed after its idle timeout. Use the session within one minute of acquiring it to refresh the session and avoid a timeout. .SS Client Side Encryption .sp When configuring a \fI\%mongoc_client_t\fP for automatic encryption via \fI\%mongoc_client_enable_auto_encryption()\fP, if a separate key vault client is set in the options (via \fI\%mongoc_auto_encryption_opts_set_keyvault_client()\fP) the key vault client must outlive the encrypted client. .sp When configuring a \fI\%mongoc_client_pool_t\fP for automatic encryption via \fI\%mongoc_client_pool_enable_auto_encryption()\fP, if a separate key vault client pool is set in the options (via \fI\%mongoc_auto_encryption_opts_set_keyvault_client_pool()\fP) the key vault client pool must outlive the encrypted client pool. .sp When creating a \fI\%mongoc_client_encryption_t\fP, the configured key vault client (set via \fI\%mongoc_client_encryption_opts_set_keyvault_client()\fP) must outlive the \fI\%mongoc_client_encryption_t\fP\&. .SS GridFS .sp The C driver includes two APIs for GridFS. .sp The older API consists of \fI\%mongoc_gridfs_t\fP and its derivatives. It contains deprecated API, does not support read preferences, and is not recommended in new applications. It does not conform to the \fI\%MongoDB GridFS specification\fP\&. .sp The newer API consists of \fI\%mongoc_gridfs_bucket_t\fP and allows uploading/downloading through derived \fI\%mongoc_stream_t\fP objects. It conforms to the \fI\%MongoDB GridFS specification\fP\&. .sp There is not always a straightforward upgrade path from an application built with \fI\%mongoc_gridfs_t\fP to \fI\%mongoc_gridfs_bucket_t\fP (e.g. a \fI\%mongoc_gridfs_file_t\fP provides functions to seek but \fI\%mongoc_stream_t\fP does not). 
But users are encouraged to upgrade when possible. .SS mongoc_auto_encryption_opts_t .sp Options for enabling automatic encryption and decryption for \fI\%In\-Use Encryption\fP\&. .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_auto_encryption_opts_t mongoc_auto_encryption_opts_t; .EE .UNINDENT .UNINDENT .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%In\-Use Encryption\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_bulk_operation_t .sp Bulk Write Operations .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_bulk_operation_t mongoc_bulk_operation_t; .EE .UNINDENT .UNINDENT .sp The opaque type \fBmongoc_bulk_operation_t\fP provides an abstraction for submitting multiple write operations as a single batch. .sp After adding all of the write operations to the \fBmongoc_bulk_operation_t\fP, call \fI\%mongoc_bulk_operation_execute()\fP to execute the operation. .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 It is only valid to call \fI\%mongoc_bulk_operation_execute()\fP once. The \fBmongoc_bulk_operation_t\fP must be destroyed afterwards. .UNINDENT .UNINDENT .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%Bulk Write Operations\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_change_stream_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct _mongoc_change_stream_t mongoc_change_stream_t; .EE .UNINDENT .UNINDENT .sp \fI\%mongoc_change_stream_t\fP is a handle to a change stream. A collection change stream can be obtained using \fI\%mongoc_collection_watch()\fP\&. .sp It is recommended to use a \fI\%mongoc_change_stream_t\fP and its functions instead of a raw aggregation with a \fB$changeStream\fP stage. For more information see the \fI\%MongoDB Manual Entry on Change Streams\fP\&.
.SS Example .sp example\-collection\-watch.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> int main (void) { bson_t empty = BSON_INITIALIZER; const bson_t *doc; bson_t *to_insert = BCON_NEW (\(dqx\(dq, BCON_INT32 (1)); const bson_t *err_doc; bson_error_t error; const char *uri_string; mongoc_uri_t *uri; mongoc_client_t *client; mongoc_collection_t *coll; mongoc_change_stream_t *stream; mongoc_write_concern_t *wc = mongoc_write_concern_new (); bson_t opts = BSON_INITIALIZER; bool r; mongoc_init (); uri_string = \(dqmongodb://\(dq \(dqlocalhost:27017,localhost:27018,localhost:\(dq \(dq27019/db?replicaSet=rs0\(dq; uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } coll = mongoc_client_get_collection (client, \(dqdb\(dq, \(dqcoll\(dq); stream = mongoc_collection_watch (coll, &empty, NULL); mongoc_write_concern_set_wmajority (wc, 10000); mongoc_write_concern_append (wc, &opts); r = mongoc_collection_insert_one (coll, to_insert, &opts, NULL, &error); if (!r) { fprintf (stderr, \(dqError: %s\en\(dq, error.message); return EXIT_FAILURE; } while (mongoc_change_stream_next (stream, &doc)) { char *as_json = bson_as_relaxed_extended_json (doc, NULL); fprintf (stderr, \(dqGot document: %s\en\(dq, as_json); bson_free (as_json); } if (mongoc_change_stream_error_document (stream, &error, &err_doc)) { if (!bson_empty (err_doc)) { fprintf (stderr, \(dqServer Error: %s\en\(dq, bson_as_relaxed_extended_json (err_doc, NULL)); } else { fprintf (stderr, \(dqClient Error: %s\en\(dq, error.message); } return EXIT_FAILURE; } bson_destroy (to_insert); mongoc_write_concern_destroy (wc); bson_destroy (&opts); mongoc_change_stream_destroy (stream); mongoc_collection_destroy (coll); mongoc_uri_destroy (uri); mongoc_client_destroy (client); mongoc_cleanup (); return
EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .SS Starting and Resuming .sp All \fBwatch\fP functions accept several options to indicate where a change stream should start returning changes from: \fBresumeAfter\fP, \fBstartAfter\fP, and \fBstartAtOperationTime\fP\&. .sp All changes returned by \fI\%mongoc_change_stream_next()\fP include a resume token in the \fB_id\fP field. MongoDB 4.2 also includes an additional resume token in each \(dqaggregate\(dq and \(dqgetMore\(dq command response, which points to the end of that response\(aqs batch. The current token is automatically cached by libmongoc. In the event of an error, libmongoc attempts to recreate the change stream starting where it left off by passing the cached resume token. libmongoc only attempts to resume once, but client applications can access the cached resume token with \fI\%mongoc_change_stream_get_resume_token()\fP and use it for their own resume logic by passing it as either the \fBresumeAfter\fP or \fBstartAfter\fP option. .sp Additionally, change streams can start returning changes at an operation time by using the \fBstartAtOperationTime\fP field. This can be the timestamp returned in the \fBoperationTime\fP field of a command reply. .sp \fBresumeAfter\fP, \fBstartAfter\fP, and \fBstartAtOperationTime\fP are mutually exclusive options. Setting more than one will result in a server error. .sp The following example implements custom resuming logic, persisting the resume token in a file. .sp example\-resume.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> /* An example implementation of custom resume logic in a change stream. * example\-resume starts a client\-wide change stream and persists the resume * token in a file \(dqresume\-token.json\(dq. On restart, if \(dqresume\-token.json\(dq * exists, the change stream starts watching after the persisted resume token. * * This behavior allows a user to exit example\-resume, and restart it later * without missing any change events.
*/ #include <unistd.h> static const char *RESUME_TOKEN_PATH = \(dqresume\-token.json\(dq; static bool _save_resume_token (const bson_t *doc) { FILE *file_stream; bson_iter_t iter; bson_t resume_token_doc; char *as_json = NULL; size_t as_json_len; size_t r, n_written; const bson_value_t *resume_token; if (!bson_iter_init_find (&iter, doc, \(dq_id\(dq)) { fprintf (stderr, \(dqdocument does not contain an _id.\en\(dq); return false; } resume_token = bson_iter_value (&iter); /* store the resume token in a document, { resumeAfter: } * which we can later append easily. */ file_stream = fopen (RESUME_TOKEN_PATH, \(dqw+\(dq); if (!file_stream) { fprintf (stderr, \(dqfailed to open %s for writing\en\(dq, RESUME_TOKEN_PATH); return false; } bson_init (&resume_token_doc); BSON_APPEND_VALUE (&resume_token_doc, \(dqresumeAfter\(dq, resume_token); as_json = bson_as_canonical_extended_json (&resume_token_doc, &as_json_len); bson_destroy (&resume_token_doc); n_written = 0; while (n_written < as_json_len) { r = fwrite ((void *) (as_json + n_written), sizeof (char), as_json_len \- n_written, file_stream); if (r == 0) { fprintf (stderr, \(dqfailed to write to %s\en\(dq, RESUME_TOKEN_PATH); bson_free (as_json); fclose (file_stream); return false; } n_written += r; } bson_free (as_json); fclose (file_stream); return true; } bool _load_resume_token (bson_t *opts) { bson_error_t error; bson_json_reader_t *reader; bson_t doc; /* if the file does not exist, skip.
*/ if (\-1 == access (RESUME_TOKEN_PATH, R_OK)) { return true; } reader = bson_json_reader_new_from_file (RESUME_TOKEN_PATH, &error); if (!reader) { fprintf (stderr, \(dqfailed to open %s for reading: %s\en\(dq, RESUME_TOKEN_PATH, error.message); return false; } bson_init (&doc); if (\-1 == bson_json_reader_read (reader, &doc, &error)) { fprintf (stderr, \(dqfailed to read doc from %s\en\(dq, RESUME_TOKEN_PATH); bson_destroy (&doc); bson_json_reader_destroy (reader); return false; } printf (\(dqfound cached resume token in %s, resuming change stream.\en\(dq, RESUME_TOKEN_PATH); bson_concat (opts, &doc); bson_destroy (&doc); bson_json_reader_destroy (reader); return true; } int main (void) { int exit_code = EXIT_FAILURE; const char *uri_string; mongoc_uri_t *uri = NULL; bson_error_t error; mongoc_client_t *client = NULL; bson_t pipeline = BSON_INITIALIZER; bson_t opts = BSON_INITIALIZER; mongoc_change_stream_t *stream = NULL; const bson_t *doc; const int max_time = 30; /* max amount of time, in seconds, that mongoc_change_stream_next can block. 
*/ mongoc_init (); uri_string = \(dqmongodb://localhost:27017/db?replicaSet=rs0\(dq; uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); goto cleanup; } client = mongoc_client_new_from_uri (uri); if (!client) { goto cleanup; } if (!_load_resume_token (&opts)) { goto cleanup; } BSON_APPEND_INT64 (&opts, \(dqmaxAwaitTimeMS\(dq, max_time * 1000); printf (\(dqlistening for changes on the client (max %d seconds).\en\(dq, max_time); stream = mongoc_client_watch (client, &pipeline, &opts); while (mongoc_change_stream_next (stream, &doc)) { char *as_json; as_json = bson_as_canonical_extended_json (doc, NULL); printf (\(dqchange received: %s\en\(dq, as_json); bson_free (as_json); if (!_save_resume_token (doc)) { goto cleanup; } } exit_code = EXIT_SUCCESS; cleanup: mongoc_uri_destroy (uri); bson_destroy (&pipeline); bson_destroy (&opts); mongoc_change_stream_destroy (stream); mongoc_client_destroy (client); mongoc_cleanup (); return exit_code; } .EE .UNINDENT .UNINDENT .sp The following example shows using \fBstartAtOperationTime\fP to synchronize a change stream with another operation. .sp example\-start\-at\-optime.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* An example of starting a change stream with startAtOperationTime. 
*/ #include <mongoc/mongoc.h> int main (void) { int exit_code = EXIT_FAILURE; const char *uri_string; mongoc_uri_t *uri = NULL; bson_error_t error; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; bson_t pipeline = BSON_INITIALIZER; bson_t opts = BSON_INITIALIZER; mongoc_change_stream_t *stream = NULL; bson_iter_t iter; const bson_t *doc; bson_value_t cached_operation_time = {0}; int i; bool r; mongoc_init (); uri_string = \(dqmongodb://localhost:27017/db?replicaSet=rs0\(dq; uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); goto cleanup; } client = mongoc_client_new_from_uri (uri); if (!client) { goto cleanup; } /* insert five documents. */ coll = mongoc_client_get_collection (client, \(dqdb\(dq, \(dqcoll\(dq); for (i = 0; i < 5; i++) { bson_t reply; bson_t *insert_cmd = BCON_NEW (\(dqinsert\(dq, \(dqcoll\(dq, \(dqdocuments\(dq, \(dq[\(dq, \(dq{\(dq, \(dqx\(dq, BCON_INT64 (i), \(dq}\(dq, \(dq]\(dq); r = mongoc_collection_write_command_with_opts ( coll, insert_cmd, NULL, &reply, &error); bson_destroy (insert_cmd); if (!r) { bson_destroy (&reply); fprintf (stderr, \(dqfailed to insert: %s\en\(dq, error.message); goto cleanup; } if (i == 0) { /* cache the operation time in the first reply. */ if (bson_iter_init_find (&iter, &reply, \(dqoperationTime\(dq)) { bson_value_copy (bson_iter_value (&iter), &cached_operation_time); } else { fprintf (stderr, \(dqreply does not contain operationTime.\en\(dq); bson_destroy (&reply); goto cleanup; } } bson_destroy (&reply); } /* start a change stream at the first returned operationTime. */ BSON_APPEND_VALUE (&opts, \(dqstartAtOperationTime\(dq, &cached_operation_time); stream = mongoc_collection_watch (coll, &pipeline, &opts); /* since the change stream started at the operation time of the first * insert, the five inserts are returned.
*/ printf (\(dqlistening for changes on db.coll:\en\(dq); while (mongoc_change_stream_next (stream, &doc)) { char *as_json; as_json = bson_as_canonical_extended_json (doc, NULL); printf (\(dqchange received: %s\en\(dq, as_json); bson_free (as_json); } exit_code = EXIT_SUCCESS; cleanup: mongoc_uri_destroy (uri); bson_destroy (&pipeline); bson_destroy (&opts); if (cached_operation_time.value_type) { bson_value_destroy (&cached_operation_time); } mongoc_change_stream_destroy (stream); mongoc_collection_destroy (coll); mongoc_client_destroy (client); mongoc_cleanup (); return exit_code; } .EE .UNINDENT .UNINDENT .SS mongoc_client_encryption_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_client_encryption_t mongoc_client_encryption_t; .EE .UNINDENT .UNINDENT .sp \fBmongoc_client_encryption_t\fP provides utility functions for \fI\%In\-Use Encryption\fP\&. .SS Thread Safety .sp \fI\%mongoc_client_encryption_t\fP is NOT thread\-safe and should only be used in the same thread as the \fI\%mongoc_client_t\fP that is configured via \fI\%mongoc_client_encryption_opts_set_keyvault_client()\fP\&. .SS Lifecycle .sp The key vault client, configured via \fI\%mongoc_client_encryption_opts_set_keyvault_client()\fP, must outlive the \fI\%mongoc_client_encryption_t\fP\&. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_client_enable_auto_encryption()\fP .fi .sp .nf \fI\%mongoc_client_pool_enable_auto_encryption()\fP .fi .sp .nf \fI\%In\-Use Encryption\fP for libmongoc .fi .sp .nf The MongoDB Manual for \fI\%Client\-Side Field Level Encryption\fP .fi .sp .nf The MongoDB Manual for \fI\%Queryable Encryption\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_client_encryption_datakey_opts_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_client_encryption_datakey_opts_t mongoc_client_encryption_datakey_opts_t; .EE .UNINDENT .UNINDENT .sp Used to set options for \fI\%mongoc_client_encryption_create_datakey()\fP\&. 
.sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_client_encryption_create_datakey()\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_client_encryption_rewrap_many_datakey_result_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_client_encryption_rewrap_many_datakey_result_t mongoc_client_encryption_rewrap_many_datakey_result_t; .EE .UNINDENT .UNINDENT .sp Used to access the result of \fI\%mongoc_client_encryption_rewrap_many_datakey()\fP\&. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_client_encryption_rewrap_many_datakey()\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_client_encryption_encrypt_opts_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_client_encryption_encrypt_opts_t mongoc_client_encryption_encrypt_opts_t; .EE .UNINDENT .UNINDENT .sp Used to set options for \fI\%mongoc_client_encryption_encrypt()\fP\&. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_client_encryption_encrypt()\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_client_encryption_encrypt_range_opts_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_client_encryption_encrypt_range_opts_t mongoc_client_encryption_encrypt_range_opts_t; .EE .UNINDENT .UNINDENT .sp \fBIMPORTANT:\fP .INDENT 0.0 .INDENT 3.5 The Range algorithm is experimental and not intended for public use. It is part of the experimental \fI\%Queryable Encryption\fP API and is subject to breaking changes in future releases. .UNINDENT .UNINDENT .sp New in version 1.24.0. .sp RangeOpts specifies index options for a Queryable Encryption field supporting \(dqrangePreview\(dq queries. Used to set options for \fI\%mongoc_client_encryption_encrypt()\fP\&. .sp The options min, max, sparsity, and precision must match the values set in the encryptedFields of the destination collection. .sp For double and decimal128 fields, min/max/precision must all be set, or all be unset.
.sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_client_encryption_encrypt()\fP \fI\%mongoc_client_encryption_encrypt_opts_t\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_client_encryption_opts_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_client_encryption_opts_t mongoc_client_encryption_opts_t; .EE .UNINDENT .UNINDENT .sp Used to set options for \fI\%mongoc_client_encryption_new()\fP\&. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_client_encryption_new()\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_client_pool_t .sp A connection pool for multi\-threaded programs. See \fI\%Connection Pooling\fP\&. .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_client_pool_t mongoc_client_pool_t; .EE .UNINDENT .UNINDENT .sp \fBmongoc_client_pool_t\fP is the basis for multi\-threading in the MongoDB C driver. Since \fI\%mongoc_client_t\fP structures are not thread\-safe, this structure is used to retrieve a new \fI\%mongoc_client_t\fP for a given thread. This structure \fIis thread\-safe\fP, except for its destructor method, \fI\%mongoc_client_pool_destroy()\fP, which \fIis not thread\-safe\fP and must only be called from one thread. .SS Example .sp example\-pool.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* gcc example\-pool.c \-o example\-pool $(pkg\-config \-\-cflags \-\-libs * libmongoc\-1.0) */ /* ./example\-pool [CONNECTION_STRING] */ #include <mongoc/mongoc.h> #include <pthread.h> #include <stdio.h> static pthread_mutex_t mutex; static bool in_shutdown = false; static void * worker (void *data) { mongoc_client_pool_t *pool = data; mongoc_client_t *client; bson_t ping = BSON_INITIALIZER; bson_error_t error; bool r; BSON_APPEND_INT32 (&ping, \(dqping\(dq, 1); while (true) { client = mongoc_client_pool_pop (pool); /* Do something with client. If you are writing an HTTP server, you * probably only want to hold onto the client for the portion of the * request performing database queries.
*/ r = mongoc_client_command_simple ( client, \(dqadmin\(dq, &ping, NULL, NULL, &error); if (!r) { fprintf (stderr, \(dq%s\en\(dq, error.message); } mongoc_client_pool_push (pool, client); pthread_mutex_lock (&mutex); if (in_shutdown || !r) { pthread_mutex_unlock (&mutex); break; } pthread_mutex_unlock (&mutex); } bson_destroy (&ping); return NULL; } int main (int argc, char *argv[]) { const char *uri_string = \(dqmongodb://127.0.0.1/?appname=pool\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_client_pool_t *pool; pthread_t threads[10]; unsigned i; void *ret; pthread_mutex_init (&mutex, NULL); mongoc_init (); if (argc > 1) { uri_string = argv[1]; } uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } pool = mongoc_client_pool_new (uri); mongoc_client_pool_set_error_api (pool, 2); for (i = 0; i < 10; i++) { pthread_create (&threads[i], NULL, worker, pool); } sleep (10); pthread_mutex_lock (&mutex); in_shutdown = true; pthread_mutex_unlock (&mutex); for (i = 0; i < 10; i++) { pthread_join (threads[i], &ret); } mongoc_client_pool_destroy (pool); mongoc_uri_destroy (uri); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .SS mongoc_client_session_t .sp Use a session for a sequence of operations, optionally with causal consistency. See \fI\%the MongoDB Manual Entry for Causal Consistency\fP\&. .SS Synopsis .sp Start a session with \fI\%mongoc_client_start_session()\fP, use the session for a sequence of operations and multi\-document transactions, then free it with \fI\%mongoc_client_session_destroy()\fP\&. Any \fI\%mongoc_cursor_t\fP or \fI\%mongoc_change_stream_t\fP using a session must be destroyed before the session, and a session must be destroyed before the \fI\%mongoc_client_t\fP it came from. .sp By default, sessions are \fI\%causally consistent\fP\&. 
To disable causal consistency, before starting a session create a \fI\%mongoc_session_opt_t\fP with \fI\%mongoc_session_opts_new()\fP and call \fI\%mongoc_session_opts_set_causal_consistency()\fP, then free the struct with \fI\%mongoc_session_opts_destroy()\fP\&. .sp Unacknowledged writes are prohibited with sessions. .sp A \fI\%mongoc_client_session_t\fP must be used by only one thread at a time. Due to session pooling, \fI\%mongoc_client_start_session()\fP may return a session that has been idle for some time and is about to be closed after its idle timeout. Use the session within one minute of acquiring it to refresh the session and avoid a timeout. .SS Fork Safety .sp A \fI\%mongoc_client_session_t\fP is only usable in the parent process after a fork. The child process must call \fI\%mongoc_client_reset()\fP on the \fBclient\fP field. .SS Example .sp example\-session.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* gcc example\-session.c \-o example\-session \e * $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) */ /* ./example\-session [CONNECTION_STRING] */ #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { int exit_code = EXIT_FAILURE; mongoc_client_t *client = NULL; const char *uri_string = \(dqmongodb://127.0.0.1/?appname=session\-example\(dq; mongoc_uri_t *uri = NULL; mongoc_client_session_t *client_session = NULL; mongoc_collection_t *collection = NULL; bson_error_t error; bson_t *selector = NULL; bson_t *update = NULL; bson_t *update_opts = NULL; bson_t *find_opts = NULL; mongoc_read_prefs_t *secondary = NULL; mongoc_cursor_t *cursor = NULL; const bson_t *doc; char *str; bool r; mongoc_init (); if (argc > 1) { uri_string = argv[1]; } uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); goto done; } client = mongoc_client_new_from_uri (uri); if (!client) { goto done; } mongoc_client_set_error_api (client, 2); /* pass NULL for options \- by
default the session is causally consistent */ client_session = mongoc_client_start_session (client, NULL, &error); if (!client_session) { fprintf (stderr, \(dqFailed to start session: %s\en\(dq, error.message); goto done; } collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqcollection\(dq); selector = BCON_NEW (\(dq_id\(dq, BCON_INT32 (1)); update = BCON_NEW (\(dq$inc\(dq, \(dq{\(dq, \(dqx\(dq, BCON_INT32 (1), \(dq}\(dq); update_opts = bson_new (); if (!mongoc_client_session_append (client_session, update_opts, &error)) { fprintf (stderr, \(dqCould not add session to opts: %s\en\(dq, error.message); goto done; } r = mongoc_collection_update_one ( collection, selector, update, update_opts, NULL /* reply */, &error); if (!r) { fprintf (stderr, \(dqUpdate failed: %s\en\(dq, error.message); goto done; } bson_destroy (selector); selector = BCON_NEW (\(dq_id\(dq, BCON_INT32 (1)); secondary = mongoc_read_prefs_new (MONGOC_READ_SECONDARY); find_opts = BCON_NEW (\(dqmaxTimeMS\(dq, BCON_INT32 (2000)); if (!mongoc_client_session_append (client_session, find_opts, &error)) { fprintf (stderr, \(dqCould not add session to opts: %s\en\(dq, error.message); goto done; }; /* read from secondary. since we\(aqre in a causally consistent session, the * data is guaranteed to reflect the update we did on the primary. the query * blocks waiting for the secondary to catch up, if necessary, or times out * and fails after 2000 ms. 
*/ cursor = mongoc_collection_find_with_opts ( collection, selector, find_opts, secondary); while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_json (doc, NULL); fprintf (stdout, \(dq%s\en\(dq, str); bson_free (str); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqCursor Failure: %s\en\(dq, error.message); goto done; } exit_code = EXIT_SUCCESS; done: if (find_opts) { bson_destroy (find_opts); } if (update) { bson_destroy (update); } if (selector) { bson_destroy (selector); } if (update_opts) { bson_destroy (update_opts); } if (secondary) { mongoc_read_prefs_destroy (secondary); } /* destroy cursor, collection, session before the client they came from */ if (cursor) { mongoc_cursor_destroy (cursor); } if (collection) { mongoc_collection_destroy (collection); } if (client_session) { mongoc_client_session_destroy (client_session); } if (uri) { mongoc_uri_destroy (uri); } if (client) { mongoc_client_destroy (client); } mongoc_cleanup (); return exit_code; } .EE .UNINDENT .UNINDENT .SS mongoc_client_session_with_transaction_cb_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef bool (*mongoc_client_session_with_transaction_cb_t) ( mongoc_client_session_t *session, void *ctx, bson_t **reply, bson_error_t *error); .EE .UNINDENT .UNINDENT .sp Provide this callback to \fI\%mongoc_client_session_with_transaction()\fP\&. The callback should run a sequence of operations meant to be contained within a transaction. The callback should not attempt to start or commit transactions. .SS Parameters .INDENT 0.0 .IP \(bu 2 \fBsession\fP: A \fI\%mongoc_client_session_t\fP\&. .IP \(bu 2 \fBctx\fP: A \fBvoid*\fP set to the user\-provided \fBctx\fP passed to \fI\%mongoc_client_session_with_transaction()\fP\&. .IP \(bu 2 \fBreply\fP: An optional location for a \fI\%bson_t\fP or \fBNULL\fP\&. The callback should set this if it runs any operations against the server and receives replies. .IP \(bu 2 \fBerror\fP: A \fI\%bson_error_t\fP\&.
The callback should set this if it receives any errors while running operations against the server. .UNINDENT .SS Return .sp Returns \fBtrue\fP for success and \fBfalse\fP on failure. If \fBcb\fP returns \fBfalse\fP then it should also set \fBerror\fP\&. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_client_session_with_transaction()\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_client_t .sp A single\-threaded MongoDB connection. See \fI\%Connection Pooling\fP\&. .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_client_t mongoc_client_t; typedef mongoc_stream_t *(*mongoc_stream_initiator_t) ( const mongoc_uri_t *uri, const mongoc_host_list_t *host, void *user_data, bson_error_t *error); .EE .UNINDENT .UNINDENT .sp \fBmongoc_client_t\fP is an opaque type that provides access to a MongoDB server, replica set, or sharded cluster. It manages the underlying sockets and routing to individual nodes based on \fI\%mongoc_read_prefs_t\fP or \fI\%mongoc_write_concern_t\fP\&. .SS Streams .sp The underlying transport for a given client can be customized, wrapped or replaced by any implementation that fulfills \fI\%mongoc_stream_t\fP\&. A custom transport can be set with \fI\%mongoc_client_set_stream_initiator()\fP\&. .SS Thread Safety .sp \fBmongoc_client_t\fP is \fINOT\fP thread\-safe and should only be used from one thread at a time. When used in multi\-threaded scenarios, it is recommended that you use the thread\-safe \fI\%mongoc_client_pool_t\fP to retrieve a \fBmongoc_client_t\fP for your thread. .SS Fork Safety .sp A \fI\%mongoc_client_t\fP is only usable in the parent process after a fork. The child process must call \fI\%mongoc_client_reset()\fP\&.
.SS Example .sp example\-client.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* gcc example\-client.c \-o example\-client $(pkg\-config \-\-cflags \-\-libs * libmongoc\-1.0) */ /* ./example\-client [CONNECTION_STRING [COLLECTION_NAME]] */ #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_collection_t *collection; mongoc_cursor_t *cursor; bson_error_t error; const bson_t *doc; const char *collection_name = \(dqtest\(dq; bson_t query; char *str; const char *uri_string = \(dqmongodb://127.0.0.1/?appname=client\-example\(dq; mongoc_uri_t *uri; mongoc_init (); if (argc > 1) { uri_string = argv[1]; } if (argc > 2) { collection_name = argv[2]; } uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); bson_init (&query); collection = mongoc_client_get_collection (client, \(dqtest\(dq, collection_name); cursor = mongoc_collection_find_with_opts ( collection, &query, NULL, /* additional options */ NULL); /* read prefs, NULL for default */ while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); fprintf (stdout, \(dq%s\en\(dq, str); bson_free (str); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqCursor Failure: %s\en\(dq, error.message); return EXIT_FAILURE; } bson_destroy (&query); mongoc_cursor_destroy (cursor); mongoc_collection_destroy (collection); mongoc_uri_destroy (uri); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .SS mongoc_collection_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_collection_t mongoc_collection_t; .EE .UNINDENT .UNINDENT .sp \fBmongoc_collection_t\fP provides access to a MongoDB collection.
This handle is useful for most CRUD operations, i.e. insert, update, delete, find, etc. .SS Read Preferences and Write Concerns .sp Read preferences and write concerns are inherited from the parent client. They can be overridden by set_* commands if so desired. .SS mongoc_cursor_t .sp Client\-side cursor abstraction .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_cursor_t mongoc_cursor_t; .EE .UNINDENT .UNINDENT .sp \fBmongoc_cursor_t\fP provides access to a MongoDB query cursor. It wraps up the wire protocol negotiation required to initiate a query and retrieve an unknown number of documents. .sp Common cursor operations include: .INDENT 0.0 .IP \(bu 2 Determine which host we\(aqve connected to with \fI\%mongoc_cursor_get_host()\fP\&. .IP \(bu 2 Retrieve more records with repeated calls to \fI\%mongoc_cursor_next()\fP\&. .IP \(bu 2 Clone a query to repeat execution at a later point with \fI\%mongoc_cursor_clone()\fP\&. .IP \(bu 2 Test for errors with \fI\%mongoc_cursor_error()\fP\&. .UNINDENT .sp Cursors are lazy, meaning that no connection is established and no network traffic occurs until the first call to \fI\%mongoc_cursor_next()\fP\&. .SS Thread Safety .sp \fBmongoc_cursor_t\fP is \fINOT\fP thread safe. It may only be used from within the thread in which it was created.
.SS Example .sp Query MongoDB and iterate results .INDENT 0.0 .INDENT 3.5 .sp .EX /* gcc example\-client.c \-o example\-client $(pkg\-config \-\-cflags \-\-libs * libmongoc\-1.0) */ /* ./example\-client [CONNECTION_STRING [COLLECTION_NAME]] */ #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_collection_t *collection; mongoc_cursor_t *cursor; bson_error_t error; const bson_t *doc; const char *collection_name = \(dqtest\(dq; bson_t query; char *str; const char *uri_string = \(dqmongodb://127.0.0.1/?appname=client\-example\(dq; mongoc_uri_t *uri; mongoc_init (); if (argc > 1) { uri_string = argv[1]; } if (argc > 2) { collection_name = argv[2]; } uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); bson_init (&query); collection = mongoc_client_get_collection (client, \(dqtest\(dq, collection_name); cursor = mongoc_collection_find_with_opts ( collection, &query, NULL, /* additional options */ NULL); /* read prefs, NULL for default */ while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); fprintf (stdout, \(dq%s\en\(dq, str); bson_free (str); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqCursor Failure: %s\en\(dq, error.message); return EXIT_FAILURE; } bson_destroy (&query); mongoc_cursor_destroy (cursor); mongoc_collection_destroy (collection); mongoc_uri_destroy (uri); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .SS mongoc_database_t .sp MongoDB Database Abstraction .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_database_t mongoc_database_t; .EE .UNINDENT .UNINDENT .sp \fBmongoc_database_t\fP provides access to a MongoDB
database. This handle is useful for actions on a particular database object. It \fIis not\fP a container for \fI\%mongoc_collection_t\fP structures. .sp Read preferences and write concerns are inherited from the parent client. They can be overridden with \fI\%mongoc_database_set_read_prefs()\fP and \fI\%mongoc_database_set_write_concern()\fP\&. .SS Examples .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> int main (int argc, char *argv[]) { mongoc_database_t *database; mongoc_client_t *client; mongoc_init (); client = mongoc_client_new (\(dqmongodb://localhost/\(dq); database = mongoc_client_get_database (client, \(dqtest\(dq); mongoc_database_destroy (database); mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .SS mongoc_delete_flags_t .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 Deprecated since version 1.9.0: These flags are deprecated and should not be used in new code. .sp Please use \fI\%mongoc_collection_delete_one()\fP or \fI\%mongoc_collection_delete_many()\fP in new code. .UNINDENT .UNINDENT .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_DELETE_NONE = 0, MONGOC_DELETE_SINGLE_REMOVE = 1 << 0, } mongoc_delete_flags_t; .EE .UNINDENT .UNINDENT .sp Flags for deletion operations .SS mongoc_find_and_modify_opts_t .sp find_and_modify abstraction .SS Synopsis .sp \fBmongoc_find_and_modify_opts_t\fP is a builder interface to construct \fI\%the findAndModify command\fP\&. .sp It was created to accommodate new arguments to \fI\%the findAndModify command\fP\&. .sp As of MongoDB 3.2, the \fI\%mongoc_write_concern_t\fP specified on the \fI\%mongoc_collection_t\fP will be used, if any.
.SS Example .sp flags.c .INDENT 0.0 .INDENT 3.5 .sp .EX void fam_flags (mongoc_collection_t *collection) { mongoc_find_and_modify_opts_t *opts; bson_t reply; bson_error_t error; bson_t query = BSON_INITIALIZER; bson_t *update; bool success; /* Find Zlatan Ibrahimovic, the striker */ BSON_APPEND_UTF8 (&query, \(dqfirstname\(dq, \(dqZlatan\(dq); BSON_APPEND_UTF8 (&query, \(dqlastname\(dq, \(dqIbrahimovic\(dq); BSON_APPEND_UTF8 (&query, \(dqprofession\(dq, \(dqFootball player\(dq); BSON_APPEND_INT32 (&query, \(dqage\(dq, 34); BSON_APPEND_INT32 ( &query, \(dqgoals\(dq, (16 + 35 + 23 + 57 + 16 + 14 + 28 + 84) + (1 + 6 + 62)); /* Add his football position */ update = BCON_NEW (\(dq$set\(dq, \(dq{\(dq, \(dqposition\(dq, BCON_UTF8 (\(dqstriker\(dq), \(dq}\(dq); opts = mongoc_find_and_modify_opts_new (); mongoc_find_and_modify_opts_set_update (opts, update); /* Create the document if it didn\(aqt exist, and return the updated document */ mongoc_find_and_modify_opts_set_flags ( opts, MONGOC_FIND_AND_MODIFY_UPSERT | MONGOC_FIND_AND_MODIFY_RETURN_NEW); success = mongoc_collection_find_and_modify_with_opts ( collection, &query, opts, &reply, &error); if (success) { char *str; str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } else { fprintf ( stderr, \(dqGot error: \e\(dq%s\e\(dq on line %d\en\(dq, error.message, __LINE__); } bson_destroy (&reply); bson_destroy (update); bson_destroy (&query); mongoc_find_and_modify_opts_destroy (opts); } .EE .UNINDENT .UNINDENT .sp bypass.c .INDENT 0.0 .INDENT 3.5 .sp .EX void fam_bypass (mongoc_collection_t *collection) { mongoc_find_and_modify_opts_t *opts; bson_t reply; bson_t *update; bson_error_t error; bson_t query = BSON_INITIALIZER; bool success; /* Find Zlatan Ibrahimovic, the striker */ BSON_APPEND_UTF8 (&query, \(dqfirstname\(dq, \(dqZlatan\(dq); BSON_APPEND_UTF8 (&query, \(dqlastname\(dq, \(dqIbrahimovic\(dq); BSON_APPEND_UTF8 (&query, \(dqprofession\(dq, \(dqFootball player\(dq); 
/* Bump his age */ update = BCON_NEW (\(dq$inc\(dq, \(dq{\(dq, \(dqage\(dq, BCON_INT32 (1), \(dq}\(dq); opts = mongoc_find_and_modify_opts_new (); mongoc_find_and_modify_opts_set_update (opts, update); /* He can still play, even though he is pretty old. */ mongoc_find_and_modify_opts_set_bypass_document_validation (opts, true); success = mongoc_collection_find_and_modify_with_opts ( collection, &query, opts, &reply, &error); if (success) { char *str; str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } else { fprintf ( stderr, \(dqGot error: \e\(dq%s\e\(dq on line %d\en\(dq, error.message, __LINE__); } bson_destroy (&reply); bson_destroy (update); bson_destroy (&query); mongoc_find_and_modify_opts_destroy (opts); } .EE .UNINDENT .UNINDENT .sp update.c .INDENT 0.0 .INDENT 3.5 .sp .EX void fam_update (mongoc_collection_t *collection) { mongoc_find_and_modify_opts_t *opts; bson_t *update; bson_t reply; bson_error_t error; bson_t query = BSON_INITIALIZER; bool success; /* Find Zlatan Ibrahimovic */ BSON_APPEND_UTF8 (&query, \(dqfirstname\(dq, \(dqZlatan\(dq); BSON_APPEND_UTF8 (&query, \(dqlastname\(dq, \(dqIbrahimovic\(dq); /* Make him a book author */ update = BCON_NEW (\(dq$set\(dq, \(dq{\(dq, \(dqauthor\(dq, BCON_BOOL (true), \(dq}\(dq); opts = mongoc_find_and_modify_opts_new (); /* Note that the document returned is the _previous_ version of the document * To fetch the modified new version, use * mongoc_find_and_modify_opts_set_flags (opts, * MONGOC_FIND_AND_MODIFY_RETURN_NEW); */ mongoc_find_and_modify_opts_set_update (opts, update); success = mongoc_collection_find_and_modify_with_opts ( collection, &query, opts, &reply, &error); if (success) { char *str; str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } else { fprintf ( stderr, \(dqGot error: \e\(dq%s\e\(dq on line %d\en\(dq, error.message, __LINE__); } bson_destroy (&reply); bson_destroy (update); bson_destroy 
(&query); mongoc_find_and_modify_opts_destroy (opts); } .EE .UNINDENT .UNINDENT .sp fields.c .INDENT 0.0 .INDENT 3.5 .sp .EX void fam_fields (mongoc_collection_t *collection) { mongoc_find_and_modify_opts_t *opts; bson_t fields = BSON_INITIALIZER; bson_t *update; bson_t reply; bson_error_t error; bson_t query = BSON_INITIALIZER; bool success; /* Find Zlatan Ibrahimovic */ BSON_APPEND_UTF8 (&query, \(dqlastname\(dq, \(dqIbrahimovic\(dq); BSON_APPEND_UTF8 (&query, \(dqfirstname\(dq, \(dqZlatan\(dq); /* Return his goal tally */ BSON_APPEND_INT32 (&fields, \(dqgoals\(dq, 1); /* Bump his goal tally */ update = BCON_NEW (\(dq$inc\(dq, \(dq{\(dq, \(dqgoals\(dq, BCON_INT32 (1), \(dq}\(dq); opts = mongoc_find_and_modify_opts_new (); mongoc_find_and_modify_opts_set_update (opts, update); mongoc_find_and_modify_opts_set_fields (opts, &fields); /* Return the new tally */ mongoc_find_and_modify_opts_set_flags (opts, MONGOC_FIND_AND_MODIFY_RETURN_NEW); success = mongoc_collection_find_and_modify_with_opts ( collection, &query, opts, &reply, &error); if (success) { char *str; str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } else { fprintf ( stderr, \(dqGot error: \e\(dq%s\e\(dq on line %d\en\(dq, error.message, __LINE__); } bson_destroy (&reply); bson_destroy (update); bson_destroy (&fields); bson_destroy (&query); mongoc_find_and_modify_opts_destroy (opts); } .EE .UNINDENT .UNINDENT .sp sort.c .INDENT 0.0 .INDENT 3.5 .sp .EX void fam_sort (mongoc_collection_t *collection) { mongoc_find_and_modify_opts_t *opts; bson_t *update; bson_t sort = BSON_INITIALIZER; bson_t reply; bson_error_t error; bson_t query = BSON_INITIALIZER; bool success; /* Find all users with the lastname Ibrahimovic */ BSON_APPEND_UTF8 (&query, \(dqlastname\(dq, \(dqIbrahimovic\(dq); /* Sort by age (descending) */ BSON_APPEND_INT32 (&sort, \(dqage\(dq, \-1); /* Bump his goal tally */ update = BCON_NEW (\(dq$set\(dq, \(dq{\(dq, \(dqoldest\(dq, BCON_BOOL 
(true), \(dq}\(dq); opts = mongoc_find_and_modify_opts_new (); mongoc_find_and_modify_opts_set_update (opts, update); mongoc_find_and_modify_opts_set_sort (opts, &sort); success = mongoc_collection_find_and_modify_with_opts ( collection, &query, opts, &reply, &error); if (success) { char *str; str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } else { fprintf ( stderr, \(dqGot error: \e\(dq%s\e\(dq on line %d\en\(dq, error.message, __LINE__); } bson_destroy (&reply); bson_destroy (update); bson_destroy (&sort); bson_destroy (&query); mongoc_find_and_modify_opts_destroy (opts); } .EE .UNINDENT .UNINDENT .sp opts.c .INDENT 0.0 .INDENT 3.5 .sp .EX void fam_opts (mongoc_collection_t *collection) { mongoc_find_and_modify_opts_t *opts; bson_t reply; bson_t *update; bson_error_t error; bson_t query = BSON_INITIALIZER; mongoc_write_concern_t *wc; bson_t extra = BSON_INITIALIZER; bool success; /* Find Zlatan Ibrahimovic, the striker */ BSON_APPEND_UTF8 (&query, \(dqfirstname\(dq, \(dqZlatan\(dq); BSON_APPEND_UTF8 (&query, \(dqlastname\(dq, \(dqIbrahimovic\(dq); BSON_APPEND_UTF8 (&query, \(dqprofession\(dq, \(dqFootball player\(dq); /* Bump his age */ update = BCON_NEW (\(dq$inc\(dq, \(dq{\(dq, \(dqage\(dq, BCON_INT32 (1), \(dq}\(dq); opts = mongoc_find_and_modify_opts_new (); mongoc_find_and_modify_opts_set_update (opts, update); /* Abort if the operation takes too long. 
*/ mongoc_find_and_modify_opts_set_max_time_ms (opts, 100); /* Set write concern w: 2 */ wc = mongoc_write_concern_new (); mongoc_write_concern_set_w (wc, 2); mongoc_write_concern_append (wc, &extra); /* Some future findAndModify option the driver doesn\(aqt support conveniently */ BSON_APPEND_INT32 (&extra, \(dqfutureOption\(dq, 42); mongoc_find_and_modify_opts_append (opts, &extra); success = mongoc_collection_find_and_modify_with_opts ( collection, &query, opts, &reply, &error); if (success) { char *str; str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } else { fprintf ( stderr, \(dqGot error: \e\(dq%s\e\(dq on line %d\en\(dq, error.message, __LINE__); } bson_destroy (&reply); bson_destroy (&extra); bson_destroy (update); bson_destroy (&query); mongoc_write_concern_destroy (wc); mongoc_find_and_modify_opts_destroy (opts); } .EE .UNINDENT .UNINDENT .sp fam.c .INDENT 0.0 .INDENT 3.5 .sp .EX int main (void) { mongoc_collection_t *collection; mongoc_database_t *database; mongoc_client_t *client; const char *uri_string = \(dqmongodb://localhost:27017/admin?appname=find\-and\-modify\-opts\-example\(dq; mongoc_uri_t *uri; bson_error_t error; bson_t *options; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); database = mongoc_client_get_database (client, \(dqdatabaseName\(dq); options = BCON_NEW (\(dqvalidator\(dq, \(dq{\(dq, \(dqage\(dq, \(dq{\(dq, \(dq$lte\(dq, BCON_INT32 (34), \(dq}\(dq, \(dq}\(dq, \(dqvalidationAction\(dq, BCON_UTF8 (\(dqerror\(dq), \(dqvalidationLevel\(dq, BCON_UTF8 (\(dqmoderate\(dq)); collection = mongoc_database_create_collection ( database, \(dqcollectionName\(dq, options, &error); if (!collection) { fprintf ( 
stderr, \(dqGot error: \e\(dq%s\e\(dq on line %d\en\(dq, error.message, __LINE__); return EXIT_FAILURE; } fam_flags (collection); fam_bypass (collection); fam_update (collection); fam_fields (collection); fam_opts (collection); fam_sort (collection); mongoc_collection_drop (collection, NULL); bson_destroy (options); mongoc_uri_destroy (uri); mongoc_database_destroy (database); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Outputs: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqlastErrorObject\(dq: { \(dqupdatedExisting\(dq: false, \(dqn\(dq: 1, \(dqupserted\(dq: { \(dq$oid\(dq: \(dq56562a99d13e6d86239c7b00\(dq } }, \(dqvalue\(dq: { \(dq_id\(dq: { \(dq$oid\(dq: \(dq56562a99d13e6d86239c7b00\(dq }, \(dqage\(dq: 34, \(dqfirstname\(dq: \(dqZlatan\(dq, \(dqgoals\(dq: 342, \(dqlastname\(dq: \(dqIbrahimovic\(dq, \(dqprofession\(dq: \(dqFootball player\(dq, \(dqposition\(dq: \(dqstriker\(dq }, \(dqok\(dq: 1 } { \(dqlastErrorObject\(dq: { \(dqupdatedExisting\(dq: true, \(dqn\(dq: 1 }, \(dqvalue\(dq: { \(dq_id\(dq: { \(dq$oid\(dq: \(dq56562a99d13e6d86239c7b00\(dq }, \(dqage\(dq: 34, \(dqfirstname\(dq: \(dqZlatan\(dq, \(dqgoals\(dq: 342, \(dqlastname\(dq: \(dqIbrahimovic\(dq, \(dqprofession\(dq: \(dqFootball player\(dq, \(dqposition\(dq: \(dqstriker\(dq }, \(dqok\(dq: 1 } { \(dqlastErrorObject\(dq: { \(dqupdatedExisting\(dq: true, \(dqn\(dq: 1 }, \(dqvalue\(dq: { \(dq_id\(dq: { \(dq$oid\(dq: \(dq56562a99d13e6d86239c7b00\(dq }, \(dqage\(dq: 35, \(dqfirstname\(dq: \(dqZlatan\(dq, \(dqgoals\(dq: 342, \(dqlastname\(dq: \(dqIbrahimovic\(dq, \(dqprofession\(dq: \(dqFootball player\(dq, \(dqposition\(dq: \(dqstriker\(dq }, \(dqok\(dq: 1 } { \(dqlastErrorObject\(dq: { \(dqupdatedExisting\(dq: true, \(dqn\(dq: 1 }, \(dqvalue\(dq: { \(dq_id\(dq: { \(dq$oid\(dq: \(dq56562a99d13e6d86239c7b00\(dq }, \(dqgoals\(dq: 343 }, \(dqok\(dq: 1 } { \(dqlastErrorObject\(dq: { \(dqupdatedExisting\(dq: true, 
\(dqn\(dq: 1 }, \(dqvalue\(dq: { \(dq_id\(dq: { \(dq$oid\(dq: \(dq56562a99d13e6d86239c7b00\(dq }, \(dqage\(dq: 35, \(dqfirstname\(dq: \(dqZlatan\(dq, \(dqgoals\(dq: 343, \(dqlastname\(dq: \(dqIbrahimovic\(dq, \(dqprofession\(dq: \(dqFootball player\(dq, \(dqposition\(dq: \(dqstriker\(dq, \(dqauthor\(dq: true }, \(dqok\(dq: 1 } .EE .UNINDENT .UNINDENT .SS mongoc_gridfs_file_list_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct _mongoc_gridfs_file_list_t mongoc_gridfs_file_list_t; .EE .UNINDENT .UNINDENT .SS Description .sp \fBmongoc_gridfs_file_list_t\fP provides a gridfs file list abstraction. It provides iteration and basic marshalling on top of a regular \fI\%mongoc_collection_find_with_opts()\fP style query. In interface, it\(aqs styled after \fI\%mongoc_cursor_t\fP\&. .SS Example .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_gridfs_file_list_t *list; mongoc_gridfs_file_t *file; list = mongoc_gridfs_find (gridfs, query); while ((file = mongoc_gridfs_file_list_next (list))) { do_something (file); mongoc_gridfs_file_destroy (file); } mongoc_gridfs_file_list_destroy (list); .EE .UNINDENT .UNINDENT .SS mongoc_gridfs_file_opt_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct { const char *md5; const char *filename; const char *content_type; const bson_t *aliases; const bson_t *metadata; uint32_t chunk_size; } mongoc_gridfs_file_opt_t; .EE .UNINDENT .UNINDENT .SS Description .sp This structure contains options that can be set on a \fI\%mongoc_gridfs_file_t\fP\&. It can be used by various functions when creating a new gridfs file. .SS mongoc_gridfs_file_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_gridfs_file_t mongoc_gridfs_file_t; .EE .UNINDENT .UNINDENT .SS Description .sp This structure provides a MongoDB GridFS file abstraction. It provides several APIs. .INDENT 0.0 .IP \(bu 2 readv, writev, seek, and tell. .IP \(bu 2 General file metadata such as filename and length.
.IP \(bu 2 GridFS metadata such as md5, filename, content_type, aliases, metadata, chunk_size, and upload_date. .UNINDENT .SS Thread Safety .sp This structure is NOT thread\-safe and should only be used from one thread at a time. .SS Related .INDENT 0.0 .IP \(bu 2 \fI\%mongoc_client_t\fP .IP \(bu 2 \fI\%mongoc_gridfs_t\fP .IP \(bu 2 \fI\%mongoc_gridfs_file_list_t\fP .IP \(bu 2 \fI\%mongoc_gridfs_file_opt_t\fP .UNINDENT .SS mongoc_gridfs_bucket_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct _mongoc_gridfs_bucket_t mongoc_gridfs_bucket_t; .EE .UNINDENT .UNINDENT .SS Description .sp \fBmongoc_gridfs_bucket_t\fP provides a spec\-compliant MongoDB GridFS implementation, superseding \fI\%mongoc_gridfs_t\fP\&. See the \fI\%GridFS MongoDB documentation\fP\&. .SS Thread Safety .sp \fI\%mongoc_gridfs_bucket_t\fP is NOT thread\-safe and should only be used in the same thread as the owning \fI\%mongoc_client_t\fP\&. .SS Lifecycle .sp It is an error to free a \fI\%mongoc_gridfs_bucket_t\fP before freeing all derived instances of \fI\%mongoc_stream_t\fP\&. The owning \fI\%mongoc_client_t\fP must outlive the \fI\%mongoc_gridfs_bucket_t\fP\&. .SS Example .sp example\-gridfs\-bucket.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> int main (int argc, char *argv[]) { const char *uri_string = \(dqmongodb://localhost:27017/?appname=new\-gridfs\-example\(dq; mongoc_client_t *client; mongoc_database_t *db; mongoc_stream_t *file_stream; mongoc_gridfs_bucket_t *bucket; mongoc_cursor_t *cursor; bson_t filter; bool res; bson_value_t file_id; bson_error_t error; const bson_t *doc; char *str; mongoc_init (); if (argc != 3) { fprintf (stderr, \(dqusage: %s SOURCE_FILE_PATH FILE_COPY_PATH\en\(dq, argv[0]); return EXIT_FAILURE; } /* 1. Make a bucket.
*/ client = mongoc_client_new (uri_string); db = mongoc_client_get_database (client, \(dqtest\(dq); bucket = mongoc_gridfs_bucket_new (db, NULL, NULL, &error); if (!bucket) { printf (\(dqError creating gridfs bucket: %s\en\(dq, error.message); return EXIT_FAILURE; } /* 2. Insert a file. */ file_stream = mongoc_stream_file_new_for_path (argv[1], O_RDONLY, 0); res = mongoc_gridfs_bucket_upload_from_stream ( bucket, \(dqmy\-file\(dq, file_stream, NULL, &file_id, &error); if (!res) { printf (\(dqError uploading file: %s\en\(dq, error.message); return EXIT_FAILURE; } mongoc_stream_close (file_stream); mongoc_stream_destroy (file_stream); /* 3. Download the file in GridFS to a local file. */ file_stream = mongoc_stream_file_new_for_path (argv[2], O_CREAT | O_RDWR, 0); if (!file_stream) { perror (\(dqError opening file stream\(dq); return EXIT_FAILURE; } res = mongoc_gridfs_bucket_download_to_stream ( bucket, &file_id, file_stream, &error); if (!res) { printf (\(dqError downloading file to stream: %s\en\(dq, error.message); return EXIT_FAILURE; } mongoc_stream_close (file_stream); mongoc_stream_destroy (file_stream); /* 4. List what files are available in GridFS. */ bson_init (&filter); cursor = mongoc_gridfs_bucket_find (bucket, &filter, NULL); while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } /* 5. Delete the file that we added. */ res = mongoc_gridfs_bucket_delete_by_id (bucket, &file_id, &error); if (!res) { printf (\(dqError deleting the file: %s\en\(dq, error.message); return EXIT_FAILURE; } /* 6. Cleanup. 
*/ mongoc_stream_close (file_stream); mongoc_stream_destroy (file_stream); mongoc_cursor_destroy (cursor); bson_destroy (&filter); mongoc_gridfs_bucket_destroy (bucket); mongoc_database_destroy (db); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf The \fI\%MongoDB GridFS specification\fP\&. .fi .sp .nf The non spec\-compliant \fI\%mongoc_gridfs_t\fP\&. .fi .sp .UNINDENT .UNINDENT .SS mongoc_gridfs_t .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 This GridFS implementation does not conform to the \fI\%MongoDB GridFS specification\fP\&. For a spec\-compliant implementation, use \fI\%mongoc_gridfs_bucket_t\fP\&. .UNINDENT .UNINDENT .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct _mongoc_gridfs_t mongoc_gridfs_t; .EE .UNINDENT .UNINDENT .SS Description .sp \fBmongoc_gridfs_t\fP provides a MongoDB gridfs implementation. The system as a whole is made up of \fBgridfs\fP objects, which contain \fBgridfs_files\fP and \fBgridfs_file_lists\fP\&. Essentially, a basic file system API. .sp There are extensive caveats about the kinds of use cases gridfs is practical for. In particular, any writing after initial file creation is likely to both break any concurrent readers and be quite expensive. That said, this implementation does allow for arbitrary writes to existing gridfs objects; just use them with caution. .sp mongoc_gridfs also integrates tightly with the \fI\%mongoc_stream_t\fP abstraction, which provides some convenient wrapping for file creation and reading/writing. It can be used without it, but it\(aqs worth looking to see if your problem can fit that model. .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 \fBmongoc_gridfs_t\fP does not support read preferences. In a replica set, GridFS queries are always routed to the primary.
.UNINDENT .UNINDENT .SS Thread Safety .sp \fBmongoc_gridfs_t\fP is NOT thread\-safe and should only be used in the same thread as the owning \fI\%mongoc_client_t\fP\&. .SS Lifecycle .sp It is an error to free a \fBmongoc_gridfs_t\fP before freeing all related instances of \fI\%mongoc_gridfs_file_t\fP and \fI\%mongoc_gridfs_file_list_t\fP\&. .SS Example .sp example\-gridfs.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <assert.h> #include <fcntl.h> #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> int main (int argc, char *argv[]) { mongoc_gridfs_t *gridfs; mongoc_gridfs_file_t *file; mongoc_gridfs_file_list_t *list; mongoc_gridfs_file_opt_t opt = {0}; mongoc_client_t *client; const char *uri_string = \(dqmongodb://127.0.0.1:27017/?appname=gridfs\-example\(dq; mongoc_uri_t *uri; mongoc_stream_t *stream; bson_t filter; bson_t opts; bson_t child; bson_error_t error; ssize_t r; char buf[4096]; mongoc_iovec_t iov; const char *filename; const char *command; bson_value_t id; if (argc < 2) { fprintf (stderr, \(dqusage \- %s command ...\en\(dq, argv[0]); return EXIT_FAILURE; } mongoc_init (); iov.iov_base = (void *) buf; iov.iov_len = sizeof buf; /* connect to localhost client */ uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); assert (client); mongoc_client_set_error_api (client, 2); /* grab a gridfs handle in test prefixed by fs */ gridfs = mongoc_client_get_gridfs (client, \(dqtest\(dq, \(dqfs\(dq, &error); assert (gridfs); command = argv[1]; filename = argv[2]; if (strcmp (command, \(dqread\(dq) == 0) { if (argc != 3) { fprintf (stderr, \(dqusage \- %s read filename\en\(dq, argv[0]); return EXIT_FAILURE; } file = mongoc_gridfs_find_one_by_filename (gridfs, filename, &error); assert (file); stream = mongoc_stream_gridfs_new (file); assert (stream); for (;;) { r = mongoc_stream_readv (stream, &iov, 1, \-1, 0); assert (r >= 0); if
(r == 0) { break; } if (fwrite (iov.iov_base, 1, r, stdout) != r) { MONGOC_ERROR (\(dqFailed to write to stdout. Exiting.\en\(dq); exit (1); } } mongoc_stream_destroy (stream); mongoc_gridfs_file_destroy (file); } else if (strcmp (command, \(dqlist\(dq) == 0) { bson_init (&filter); bson_init (&opts); bson_append_document_begin (&opts, \(dqsort\(dq, \-1, &child); BSON_APPEND_INT32 (&child, \(dqfilename\(dq, 1); bson_append_document_end (&opts, &child); list = mongoc_gridfs_find_with_opts (gridfs, &filter, &opts); bson_destroy (&filter); bson_destroy (&opts); while ((file = mongoc_gridfs_file_list_next (list))) { const char *name = mongoc_gridfs_file_get_filename (file); printf (\(dq%s\en\(dq, name ? name : \(dq?\(dq); mongoc_gridfs_file_destroy (file); } mongoc_gridfs_file_list_destroy (list); } else if (strcmp (command, \(dqwrite\(dq) == 0) { if (argc != 4) { fprintf (stderr, \(dqusage \- %s write filename input_file\en\(dq, argv[0]); return EXIT_FAILURE; } stream = mongoc_stream_file_new_for_path (argv[3], O_RDONLY, 0); assert (stream); opt.filename = filename; /* the driver generates a file_id for you */ file = mongoc_gridfs_create_file_from_stream (gridfs, stream, &opt); assert (file); id.value_type = BSON_TYPE_INT32; id.value.v_int32 = 1; /* optional: the following method specifies a file_id of any BSON type */ if (!mongoc_gridfs_file_set_id (file, &id, &error)) { fprintf (stderr, \(dq%s\en\(dq, error.message); return EXIT_FAILURE; } if (!mongoc_gridfs_file_save (file)) { mongoc_gridfs_file_error (file, &error); fprintf (stderr, \(dqCould not save: %s\en\(dq, error.message); return EXIT_FAILURE; } mongoc_gridfs_file_destroy (file); } else { fprintf (stderr, \(dqUnknown command\(dq); return EXIT_FAILURE; } mongoc_gridfs_destroy (gridfs); mongoc_uri_destroy (uri); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf The \fI\%MongoDB GridFS specification\fP\&. 
.fi .sp .nf The spec\-compliant \fI\%mongoc_gridfs_bucket_t\fP\&. .fi .sp .UNINDENT .UNINDENT .SS mongoc_host_list_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct { mongoc_host_list_t *next; char host[BSON_HOST_NAME_MAX + 1]; char host_and_port[BSON_HOST_NAME_MAX + 7]; uint16_t port; int family; void *padding[4]; } mongoc_host_list_t; .EE .UNINDENT .UNINDENT .SS Description .sp The host and port of a MongoDB server. Can be part of a linked list: for example the return value of \fI\%mongoc_uri_get_hosts()\fP when multiple hosts are provided in the MongoDB URI. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_uri_get_hosts()\fP and \fI\%mongoc_cursor_get_host()\fP\&. .fi .sp .UNINDENT .UNINDENT .SS mongoc_index_opt_geo_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct { uint8_t twod_sphere_version; uint8_t twod_bits_precision; double twod_location_min; double twod_location_max; double haystack_bucket_size; uint8_t *padding[32]; } mongoc_index_opt_geo_t; .EE .UNINDENT .UNINDENT .SS Description .sp This structure contains the options that may be used for tuning a GEO index. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_index_opt_t\fP .fi .sp .nf \fI\%mongoc_index_opt_wt_t\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_index_opt_t .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 Deprecated since version 1.8.0: This structure is deprecated and should not be used in new code. See \fI\%Manage Collection Indexes\fP\&.
.UNINDENT .UNINDENT .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct { bool is_initialized; bool background; bool unique; const char *name; bool drop_dups; bool sparse; int32_t expire_after_seconds; int32_t v; const bson_t *weights; const char *default_language; const char *language_override; mongoc_index_opt_geo_t *geo_options; mongoc_index_opt_storage_t *storage_options; const bson_t *partial_filter_expression; const bson_t *collation; void *padding[4]; } mongoc_index_opt_t; .EE .UNINDENT .UNINDENT .SS Description .sp This structure contains the options that may be used for tuning a specific index. .sp See the \fI\%createIndexes documentation\fP in the MongoDB manual for descriptions of individual options. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 dropDups is deprecated as of MongoDB version 3.0.0. This option is silently ignored by the server and unique index builds using this option will fail if a duplicate value is detected. .UNINDENT .UNINDENT .SS Example .INDENT 0.0 .INDENT 3.5 .sp .EX { bson_t keys; bson_error_t error; mongoc_index_opt_t opt; mongoc_index_opt_geo_t geo_opt; mongoc_index_opt_init (&opt); mongoc_index_opt_geo_init (&geo_opt); bson_init (&keys); BSON_APPEND_UTF8 (&keys, \(dqlocation\(dq, \(dq2d\(dq); geo_opt.twod_location_min = \-123; geo_opt.twod_location_max = +123; geo_opt.twod_bits_precision = 30; opt.geo_options = &geo_opt; collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqgeo_test\(dq); if (mongoc_collection_create_index (collection, &keys, &opt, &error)) { /* Successfully created the geo index */ } bson_destroy (&keys); mongoc_collection_destroy (collection); } .EE .UNINDENT .UNINDENT .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_index_opt_geo_t\fP .fi .sp .nf \fI\%mongoc_index_opt_wt_t\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_index_opt_wt_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct { mongoc_index_opt_storage_t base; const char *config_str; void
*padding[8]; } mongoc_index_opt_wt_t; .EE .UNINDENT .UNINDENT .SS Description .sp This structure contains the options that may be used for tuning a WiredTiger\-specific index. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_index_opt_t\fP .fi .sp .nf \fI\%mongoc_index_opt_geo_t\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_insert_flags_t .sp Flags for insert operations .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_INSERT_NONE = 0, MONGOC_INSERT_CONTINUE_ON_ERROR = 1 << 0, } mongoc_insert_flags_t; #define MONGOC_INSERT_NO_VALIDATE (1U << 31) .EE .UNINDENT .UNINDENT .SS Description .sp These flags correspond to the MongoDB wire protocol. They may be bitwise or\(aqd together. They may modify how an insert happens on the MongoDB server. .SS Flag Values .TS center; |l|l|. _ T{ MONGOC_INSERT_NONE T} T{ Specify no insert flags. T} _ T{ MONGOC_INSERT_CONTINUE_ON_ERROR T} T{ Continue inserting documents from the insertion set even if one insert fails. T} _ T{ MONGOC_INSERT_NO_VALIDATE T} T{ Do not validate insertion documents before performing an insert. Validation can be expensive, so this can save some time if you know your documents are already valid. T} _ .TE .SS mongoc_iovec_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #ifdef _WIN32 typedef struct { u_long iov_len; char *iov_base; } mongoc_iovec_t; #else typedef struct iovec mongoc_iovec_t; #endif .EE .UNINDENT .UNINDENT .sp The \fBmongoc_iovec_t\fP structure is a portability abstraction for consumers of the \fI\%mongoc_stream_t\fP interfaces. It allows for scatter/gather I/O through the socket subsystem. .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 When writing portable code, beware of the ordering of \fBiov_len\fP and \fBiov_base\fP as they are different on various platforms. Therefore, you should not use C initializers for initialization. .UNINDENT .UNINDENT .SS mongoc_optional_t .sp A struct to store optional boolean values.
.SS Synopsis .sp Used to specify optional boolean flags, which may remain unset. .sp This is used within \fI\%mongoc_server_api_t\fP to track whether a flag was explicitly set. .SS mongoc_query_flags_t .sp Flags for query operations .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_QUERY_NONE = 0, MONGOC_QUERY_TAILABLE_CURSOR = 1 << 1, MONGOC_QUERY_SECONDARY_OK = 1 << 2, MONGOC_QUERY_OPLOG_REPLAY = 1 << 3, MONGOC_QUERY_NO_CURSOR_TIMEOUT = 1 << 4, MONGOC_QUERY_AWAIT_DATA = 1 << 5, MONGOC_QUERY_EXHAUST = 1 << 6, MONGOC_QUERY_PARTIAL = 1 << 7, } mongoc_query_flags_t; .EE .UNINDENT .UNINDENT .SS Description .sp These flags correspond to the MongoDB wire protocol. They may be bitwise or\(aqd together. They may modify how a query is performed in the MongoDB server. .SS Flag Values .TS center; |l|l|. _ T{ MONGOC_QUERY_NONE T} T{ Specify no query flags. T} _ T{ MONGOC_QUERY_TAILABLE_CURSOR T} T{ Cursor will not be closed when the last data is retrieved. You can resume this cursor later. T} _ T{ MONGOC_QUERY_SECONDARY_OK T} T{ Allow query of replica set secondaries. T} _ T{ MONGOC_QUERY_OPLOG_REPLAY T} T{ Used internally by MongoDB. T} _ T{ MONGOC_QUERY_NO_CURSOR_TIMEOUT T} T{ The server normally times out an idle cursor after an inactivity period (10 minutes). This prevents that. T} _ T{ MONGOC_QUERY_AWAIT_DATA T} T{ Use with MONGOC_QUERY_TAILABLE_CURSOR. Block rather than returning no data. After a period, time out. T} _ T{ MONGOC_QUERY_EXHAUST T} T{ Stream the data down full blast in multiple \(dqreply\(dq packets. Faster when you are pulling down a lot of data and you know you want to retrieve it all. Only applies to cursors created from a find operation (i.e. \fI\%mongoc_collection_find()\fP). T} _ T{ MONGOC_QUERY_PARTIAL T} T{ Get partial results from mongos if some shards are down (instead of throwing an error). 
T} _ .TE .SS mongoc_rand .sp MongoDB Random Number Generator .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX void mongoc_rand_add (const void *buf, int num, double entropy); void mongoc_rand_seed (const void *buf, int num); int mongoc_rand_status (void); .EE .UNINDENT .UNINDENT .SS Description .sp The \fBmongoc_rand\fP family of functions provide access to the low level randomness primitives used by the MongoDB C Driver. In particular, they control the creation of cryptographically strong pseudo\-random bytes required by some security mechanisms. .sp While we can usually pull enough entropy from the environment, you may be required to seed the PRNG manually depending on your OS, hardware and other entropy consumers running on the same system. .SS Entropy .sp \fBmongoc_rand_add\fP and \fBmongoc_rand_seed\fP allow the user to directly provide entropy. They differ insofar as \fBmongoc_rand_seed\fP requires that each bit provided is fully random. \fBmongoc_rand_add\fP allows the user to specify the degree of randomness in the provided bytes as well. .SS Status .sp The \fBmongoc_rand_status\fP function allows the user to check the status of the mongoc PRNG. This can be used to guarantee sufficient entropy at program startup, rather than waiting for runtime errors to occur. .SS mongoc_read_concern_t .sp Read Concern abstraction .SS Synopsis .sp New in MongoDB 3.2. .sp The \fBmongoc_read_concern_t\fP allows clients to choose a level of isolation for their reads. The default, MONGOC_READ_CONCERN_LEVEL_LOCAL, is right for the great majority of applications. .sp You can specify a read concern on connection objects, database objects, or collection objects. .sp See \fI\%readConcern\fP on the MongoDB website for more information. .sp Read Concern is only sent to MongoDB when it has explicitly been set by \fI\%mongoc_read_concern_set_level()\fP to anything other than NULL. .SS Read Concern Levels .TS center; |l|l|l|. 
_ T{ Macro T} T{ Description T} T{ First MongoDB version T} _ T{ MONGOC_READ_CONCERN_LEVEL_LOCAL T} T{ Level \(dqlocal\(dq, the default. T} T{ 3.2 T} _ T{ MONGOC_READ_CONCERN_LEVEL_MAJORITY T} T{ Level \(dqmajority\(dq. T} T{ 3.2 T} _ T{ MONGOC_READ_CONCERN_LEVEL_LINEARIZABLE T} T{ Level \(dqlinearizable\(dq. T} T{ 3.4 T} _ T{ MONGOC_READ_CONCERN_LEVEL_AVAILABLE T} T{ Level \(dqavailable\(dq. T} T{ 3.6 T} _ T{ MONGOC_READ_CONCERN_LEVEL_SNAPSHOT T} T{ Level \(dqsnapshot\(dq. T} T{ 4.0 T} _ .TE .sp For the sake of compatibility with future versions of MongoDB, \fI\%mongoc_read_concern_set_level()\fP allows any string, not just this list of known read concern levels. .sp See \fI\%Read Concern Levels\fP in the MongoDB manual for more information about the individual read concern levels. .SS mongoc_read_mode_t .sp Read Preference Modes .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_READ_PRIMARY = (1 << 0), MONGOC_READ_SECONDARY = (1 << 1), MONGOC_READ_PRIMARY_PREFERRED = (1 << 2) | MONGOC_READ_PRIMARY, MONGOC_READ_SECONDARY_PREFERRED = (1 << 2) | MONGOC_READ_SECONDARY, MONGOC_READ_NEAREST = (1 << 3) | MONGOC_READ_SECONDARY, } mongoc_read_mode_t; .EE .UNINDENT .UNINDENT .SS Description .sp This enum describes how reads should be dispatched. The default is MONGOC_READ_PRIMARY. .sp Please see the MongoDB website for a description of \fI\%Read Preferences\fP\&. .SS mongoc_read_prefs_t .sp A read preference abstraction .SS Synopsis .sp \fI\%mongoc_read_prefs_t\fP provides an abstraction on top of the MongoDB connection read preferences. It allows for hinting to the driver which nodes in a replica set should be accessed first and how. .sp You can specify a read preference mode on connection objects, database objects, collection objects, or per\-operation. Generally, it makes the most sense to stick with the global default mode, \fBMONGOC_READ_PRIMARY\fP\&. All of the other modes come with caveats that won\(aqt be covered in great detail here. 
.SS Read Modes .TS center; |l|l|. _ T{ MONGOC_READ_PRIMARY T} T{ Default mode. All operations read from the current replica set primary. T} _ T{ MONGOC_READ_SECONDARY T} T{ All operations read from among the nearest secondary members of the replica set. T} _ T{ MONGOC_READ_PRIMARY_PREFERRED T} T{ In most situations, operations read from the primary but if it is unavailable, operations read from secondary members. T} _ T{ MONGOC_READ_SECONDARY_PREFERRED T} T{ In most situations, operations read from among the nearest secondary members, but if no secondaries are available, operations read from the primary. T} _ T{ MONGOC_READ_NEAREST T} T{ Operations read from among the nearest members of the replica set, irrespective of the member\(aqs type. T} _ .TE .SS Tag Sets .sp Tag sets allow you to specify custom read preferences and write concerns so that your application can target operations to specific members. .sp Custom read preferences and write concerns evaluate tag sets in different ways: read preferences consider the value of a tag when selecting a member to read from, while write concerns ignore the value of a tag when selecting a member, except to consider whether or not the value is unique. .sp You can specify tag sets with the following read preference modes: .INDENT 0.0 .IP \(bu 2 primaryPreferred .IP \(bu 2 secondary .IP \(bu 2 secondaryPreferred .IP \(bu 2 nearest .UNINDENT .sp Tags are not compatible with \fBMONGOC_READ_PRIMARY\fP and, in general, only apply when selecting a secondary member of a set for a read operation. However, the nearest read mode, when combined with a tag set, will select the nearest member that matches the specified tag set, which may be a primary or secondary. .sp Tag sets are represented as a comma\-separated list of colon\-separated key\-value pairs when provided as a connection string, e.g. \fIdc:ny,rack:1\fP\&. .sp To specify a list of tag sets, use multiple readPreferenceTags parameters, e.g.
.INDENT 0.0 .INDENT 3.5 .sp .EX readPreferenceTags=dc:ny,rack:1;readPreferenceTags=dc:ny;readPreferenceTags= .EE .UNINDENT .UNINDENT .sp Note the empty value for the last one, which means \(dqmatch any secondary as a last resort\(dq. .sp Order matters when using multiple readPreferenceTags. .sp Tag sets can also be configured using \fI\%mongoc_read_prefs_set_tags()\fP\&. .sp All interfaces use the same member selection logic to choose the member to which to direct read operations, basing the choice on read preference mode and tag sets. .SS Max Staleness .sp When connected to a replica set running MongoDB 3.4 or later, the driver estimates the staleness of each secondary based on lastWriteDate values provided in server hello responses. .sp Max Staleness is the maximum replication lag in seconds (wall clock time) that a secondary can suffer and still be eligible for reads. The default is \fBMONGOC_NO_MAX_STALENESS\fP, which disables staleness checks. Otherwise, it must be a positive integer at least \fBMONGOC_SMALLEST_MAX_STALENESS_SECONDS\fP (90 seconds). .sp Max Staleness is also supported by sharded clusters of replica sets if all servers run MongoDB 3.4 or later. .SS Hedged Reads .sp When connecting to a sharded cluster running MongoDB 4.4 or later, reads can be sent in parallel to the two \(dqbest\(dq hosts. Once one result returns, any other outstanding operations that were part of the hedged read are cancelled. .sp When the read preference mode is \fBMONGOC_READ_NEAREST\fP and the sharded cluster is running MongoDB 4.4 or later, hedged reads are enabled by default. Additionally, hedged reads may be explicitly enabled or disabled by calling \fI\%mongoc_read_prefs_set_hedge()\fP with a BSON document, e.g. .INDENT 0.0 .INDENT 3.5 .sp .EX { enabled: true } .EE .UNINDENT .UNINDENT .sp Appropriate values for the \fBenabled\fP key are \fBtrue\fP or \fBfalse\fP\&.
.SS mongoc_remove_flags_t .sp Flags for deletion operations .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_REMOVE_NONE = 0, MONGOC_REMOVE_SINGLE_REMOVE = 1 << 0, } mongoc_remove_flags_t; .EE .UNINDENT .UNINDENT .SS Description .sp These flags correspond to the MongoDB wire protocol. They may be bitwise or\(aqd together. They may change the number of documents that are removed during a remove command. .SS Flag Values .TS center; |l|l|. _ T{ MONGOC_REMOVE_NONE T} T{ Specify no removal flags. All matching documents will be removed. T} _ T{ MONGOC_REMOVE_SINGLE_REMOVE T} T{ Only remove the first matching document from the selector. T} _ .TE .SS mongoc_reply_flags_t .sp Flags from server replies .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_REPLY_NONE = 0, MONGOC_REPLY_CURSOR_NOT_FOUND = 1 << 0, MONGOC_REPLY_QUERY_FAILURE = 1 << 1, MONGOC_REPLY_SHARD_CONFIG_STALE = 1 << 2, MONGOC_REPLY_AWAIT_CAPABLE = 1 << 3, } mongoc_reply_flags_t; .EE .UNINDENT .UNINDENT .SS Description .sp These flags correspond to the wire protocol. They may be bitwise or\(aqd together. .SS Flag Values .TS center; |l|l|. _ T{ MONGOC_REPLY_NONE T} T{ No flags set. T} _ T{ MONGOC_REPLY_CURSOR_NOT_FOUND T} T{ No matching cursor was found on the server. T} _ T{ MONGOC_REPLY_QUERY_FAILURE T} T{ The query failed or was invalid. Error document has been provided. T} _ T{ MONGOC_REPLY_SHARD_CONFIG_STALE T} T{ Shard config is stale. T} _ T{ MONGOC_REPLY_AWAIT_CAPABLE T} T{ If the returned cursor is capable of MONGOC_QUERY_AWAIT_DATA. T} _ .TE .SS mongoc_server_api_t .sp A versioned API to use for connections. .SS Synopsis .sp Used to specify which version of the MongoDB server\(aqs API to use for driver connections. .sp The server API type takes a \fI\%mongoc_server_api_version_t\fP\&. It can optionally be strict about the list of allowed commands in that API version, and can also optionally provide errors for deprecated commands in that API version. 
.sp A \fI\%mongoc_server_api_t\fP can be set on a client, and will then be sent to MongoDB for most commands run using that client. .SS mongoc_server_api_version_t .sp A representation of server API version numbers. .SS Synopsis .sp Used to specify which version of the MongoDB server\(aqs API to use for driver connections. .SS Supported API Versions .sp The driver currently supports the following MongoDB API versions: .TS center; |l|l|. _ T{ Enum value T} T{ MongoDB version string T} _ T{ MONGOC_SERVER_API_V1 T} T{ \(dq1\(dq T} _ .TE .SS mongoc_server_description_t .sp Server description .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct _mongoc_server_description_t mongoc_server_description_t .EE .UNINDENT .UNINDENT .sp \fBmongoc_server_description_t\fP holds information about a mongod or mongos the driver is connected to. .SS Lifecycle .sp Clean up a \fBmongoc_server_description_t\fP with \fI\%mongoc_server_description_destroy()\fP when necessary. .sp Applications receive a temporary reference to a \fBmongoc_server_description_t\fP as a parameter to an SDAM Monitoring callback; this reference must not be destroyed. See \fI\%Introduction to Application Performance Monitoring\fP\&. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_client_get_server_descriptions()\fP\&. .fi .sp .UNINDENT .UNINDENT .SS mongoc_session_opt_t .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct _mongoc_session_opt_t mongoc_session_opt_t; .EE .UNINDENT .UNINDENT .SS Synopsis .sp Start a session with \fI\%mongoc_client_start_session()\fP, use the session for a sequence of operations and multi\-document transactions, then free it with \fI\%mongoc_client_session_destroy()\fP\&. Any \fI\%mongoc_cursor_t\fP or \fI\%mongoc_change_stream_t\fP using a session must be destroyed before the session, and a session must be destroyed before the \fI\%mongoc_client_t\fP it came from. .sp By default, sessions are \fI\%causally consistent\fP\&.
To disable causal consistency, before starting a session create a \fI\%mongoc_session_opt_t\fP with \fI\%mongoc_session_opts_new()\fP and call \fI\%mongoc_session_opts_set_causal_consistency()\fP, then free the struct with \fI\%mongoc_session_opts_destroy()\fP\&. .sp Unacknowledged writes are prohibited with sessions. .sp A \fI\%mongoc_client_session_t\fP must be used by only one thread at a time. Due to session pooling, \fI\%mongoc_client_start_session()\fP may return a session that has been idle for some time and is about to be closed after its idle timeout. Use the session within one minute of acquiring it to refresh the session and avoid a timeout. .sp See the example code for \fI\%mongoc_session_opts_set_causal_consistency()\fP\&. .SS mongoc_socket_t .sp Portable socket abstraction .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct _mongoc_socket_t mongoc_socket_t .EE .UNINDENT .UNINDENT .SS Description .sp This structure provides a socket abstraction that is friendlier for portability than BSD sockets directly. Inconsistencies between Linux, various BSDs, Solaris, and Windows are handled here. .SS mongoc_ssl_opt_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct { const char *pem_file; const char *pem_pwd; const char *ca_file; const char *ca_dir; const char *crl_file; bool weak_cert_validation; bool allow_invalid_hostname; void *internal; void *padding[6]; } mongoc_ssl_opt_t; .EE .UNINDENT .UNINDENT .SS Description .sp This structure is used to set the TLS options for a \fI\%mongoc_client_t\fP or \fI\%mongoc_client_pool_t\fP\&. .sp Beginning in version 1.2.0, once a pool or client has any TLS options set, all connections use TLS, even if \fBtls=true\fP is omitted from the MongoDB URI. Before, TLS options were ignored unless \fBtls=true\fP was included in the URI.
.sp As of 1.4.0, the \fI\%mongoc_client_pool_set_ssl_opts()\fP and \fI\%mongoc_client_set_ssl_opts()\fP functions not only shallow\-copy the struct, but also copy the strings it points to. It is therefore no longer necessary to ensure that the values remain valid after setting them. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%Configuring TLS\fP .fi .sp .nf \fI\%mongoc_client_set_ssl_opts()\fP .fi .sp .nf \fI\%mongoc_client_pool_set_ssl_opts()\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_stream_buffered_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_stream_buffered_t mongoc_stream_buffered_t; .EE .UNINDENT .UNINDENT .SS Description .sp \fBmongoc_stream_buffered_t\fP should be considered a subclass of \fI\%mongoc_stream_t\fP\&. It performs buffering on an underlying stream. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_stream_buffered_new()\fP .fi .sp .nf \fI\%mongoc_stream_destroy()\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_stream_file_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_stream_file_t mongoc_stream_file_t .EE .UNINDENT .UNINDENT .sp \fBmongoc_stream_file_t\fP is a \fI\%mongoc_stream_t\fP subclass for working with standard UNIX\-style file descriptors. .SS mongoc_stream_socket_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_stream_socket_t mongoc_stream_socket_t .EE .UNINDENT .UNINDENT .sp \fBmongoc_stream_socket_t\fP should be considered a subclass of \fI\%mongoc_stream_t\fP that works upon socket streams. .SS mongoc_stream_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_stream_t mongoc_stream_t .EE .UNINDENT .UNINDENT .sp \fBmongoc_stream_t\fP provides a generic streaming IO abstraction based on a struct of pointers interface. The idea is to allow wrappers, perhaps other language drivers, to easily shim their IO system on top of \fBmongoc_stream_t\fP\&. .sp The API for the stream abstraction is currently private and non\-extensible.
.SS Stream Types .sp There are a number of built\-in stream types that come with mongoc. The default configuration is a buffered UNIX stream. If TLS is in use, that in turn is wrapped in a TLS stream. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_stream_buffered_t\fP .fi .sp .nf \fI\%mongoc_stream_file_t\fP .fi .sp .nf \fI\%mongoc_stream_socket_t\fP .fi .sp .nf \fI\%mongoc_stream_tls_t\fP .fi .sp .UNINDENT .UNINDENT .SS mongoc_stream_tls_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_stream_tls_t mongoc_stream_tls_t .EE .UNINDENT .UNINDENT .sp \fBmongoc_stream_tls_t\fP is a \fI\%mongoc_stream_t\fP subclass for working with TLS streams. .SS mongoc_topology_description_t .sp Status of MongoDB Servers .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_topology_description_t mongoc_topology_description_t; .EE .UNINDENT .UNINDENT .sp \fBmongoc_topology_description_t\fP is an opaque type representing the driver\(aqs knowledge of the MongoDB server or servers it is connected to. Its API conforms to the \fI\%SDAM Monitoring Specification\fP\&. .sp Applications receive a temporary reference to a \fBmongoc_topology_description_t\fP as a parameter to an SDAM Monitoring callback; this reference must not be destroyed. See \fI\%Introduction to Application Performance Monitoring\fP\&. .SS mongoc_transaction_opt_t .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> typedef struct _mongoc_transaction_opt_t mongoc_transaction_opt_t; .EE .UNINDENT .UNINDENT .SS Synopsis .sp Options for starting a multi\-document transaction. .sp When a session is first created with \fI\%mongoc_client_start_session()\fP, it inherits from the client the read concern, write concern, and read preference with which to start transactions. Each of these fields can be overridden independently.
Create a \fI\%mongoc_transaction_opt_t\fP with \fI\%mongoc_transaction_opts_new()\fP, and pass a non\-NULL option to any of the \fI\%mongoc_transaction_opt_t\fP setter functions: .INDENT 0.0 .IP \(bu 2 \fI\%mongoc_transaction_opts_set_read_concern()\fP .IP \(bu 2 \fI\%mongoc_transaction_opts_set_write_concern()\fP .IP \(bu 2 \fI\%mongoc_transaction_opts_set_read_prefs()\fP .UNINDENT .sp Pass the resulting transaction options to \fI\%mongoc_client_session_start_transaction()\fP\&. Each field set in the transaction options overrides the inherited client configuration. .SS Example .sp example\-transaction.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* gcc example\-transaction.c \-o example\-transaction \e * $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) */ /* ./example\-transaction [CONNECTION_STRING] */ #include <stdio.h> #include <mongoc/mongoc.h> int main (int argc, char *argv[]) { int exit_code = EXIT_FAILURE; mongoc_client_t *client = NULL; mongoc_database_t *database = NULL; mongoc_collection_t *collection = NULL; mongoc_client_session_t *session = NULL; mongoc_session_opt_t *session_opts = NULL; mongoc_transaction_opt_t *default_txn_opts = NULL; mongoc_transaction_opt_t *txn_opts = NULL; mongoc_read_concern_t *read_concern = NULL; mongoc_write_concern_t *write_concern = NULL; const char *uri_string = \(dqmongodb://127.0.0.1/?appname=transaction\-example\(dq; mongoc_uri_t *uri; bson_error_t error; bson_t *doc = NULL; bson_t *insert_opts = NULL; int32_t i; int64_t start; bson_t reply = BSON_INITIALIZER; char *reply_json; bool r; mongoc_init (); if (argc > 1) { uri_string = argv[1]; } uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { MONGOC_ERROR (\(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); goto done; } client = mongoc_client_new_from_uri (uri); if (!client) { goto done; } mongoc_client_set_error_api (client, 2); database = mongoc_client_get_database (client, \(dqexample\-transaction\(dq); /* inserting into a nonexistent collection
normally creates it, but a * collection can\(aqt be created in a transaction; create it now */ collection = mongoc_database_create_collection (database, \(dqcollection\(dq, NULL, &error); if (!collection) { /* code 48 is NamespaceExists, see error_codes.err in mongodb source */ if (error.code == 48) { collection = mongoc_database_get_collection (database, \(dqcollection\(dq); } else { MONGOC_ERROR (\(dqFailed to create collection: %s\(dq, error.message); goto done; } } /* a transaction\(aqs read preferences, read concern, and write concern can be * set on the client, on the default transaction options, or when starting * the transaction. for the sake of this example, set read concern on the * default transaction options. */ default_txn_opts = mongoc_transaction_opts_new (); read_concern = mongoc_read_concern_new (); mongoc_read_concern_set_level (read_concern, \(dqsnapshot\(dq); mongoc_transaction_opts_set_read_concern (default_txn_opts, read_concern); session_opts = mongoc_session_opts_new (); mongoc_session_opts_set_default_transaction_opts (session_opts, default_txn_opts); session = mongoc_client_start_session (client, session_opts, &error); if (!session) { MONGOC_ERROR (\(dqFailed to start session: %s\(dq, error.message); goto done; } /* in this example, set write concern when starting the transaction */ txn_opts = mongoc_transaction_opts_new (); write_concern = mongoc_write_concern_new (); mongoc_write_concern_set_wmajority (write_concern, 1000 /* wtimeout */); mongoc_transaction_opts_set_write_concern (txn_opts, write_concern); insert_opts = bson_new (); if (!mongoc_client_session_append (session, insert_opts, &error)) { MONGOC_ERROR (\(dqCould not add session to opts: %s\(dq, error.message); goto done; } retry_transaction: r = mongoc_client_session_start_transaction (session, txn_opts, &error); if (!r) { MONGOC_ERROR (\(dqFailed to start transaction: %s\(dq, error.message); goto done; } /* insert two documents \- on error, retry the whole transaction */ for 
(i = 0; i < 2; i++) { doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (i)); bson_destroy (&reply); r = mongoc_collection_insert_one ( collection, doc, insert_opts, &reply, &error); bson_destroy (doc); if (!r) { MONGOC_ERROR (\(dqInsert failed: %s\(dq, error.message); mongoc_client_session_abort_transaction (session, NULL); /* a network error, primary failover, or other temporary error in a * transaction includes {\(dqerrorLabels\(dq: [\(dqTransientTransactionError\(dq]}, * meaning that trying the entire transaction again may succeed */ if (mongoc_error_has_label (&reply, \(dqTransientTransactionError\(dq)) { goto retry_transaction; } goto done; } reply_json = bson_as_json (&reply, NULL); printf (\(dq%s\en\(dq, reply_json); bson_free (reply_json); } /* in case of transient errors, retry for 5 seconds to commit transaction */ start = bson_get_monotonic_time (); while (bson_get_monotonic_time () \- start < 5 * 1000 * 1000) { bson_destroy (&reply); r = mongoc_client_session_commit_transaction (session, &reply, &error); if (r) { /* success */ break; } else { MONGOC_ERROR (\(dqWarning: commit failed: %s\(dq, error.message); if (mongoc_error_has_label (&reply, \(dqTransientTransactionError\(dq)) { goto retry_transaction; } else if (mongoc_error_has_label (&reply, \(dqUnknownTransactionCommitResult\(dq)) { /* try again to commit */ continue; } /* unrecoverable error trying to commit */ break; } } exit_code = EXIT_SUCCESS; done: bson_destroy (&reply); bson_destroy (insert_opts); mongoc_write_concern_destroy (write_concern); mongoc_read_concern_destroy (read_concern); mongoc_transaction_opts_destroy (txn_opts); mongoc_transaction_opts_destroy (default_txn_opts); mongoc_client_session_destroy (session); mongoc_collection_destroy (collection); mongoc_database_destroy (database); mongoc_uri_destroy (uri); mongoc_client_destroy (client); mongoc_cleanup (); return exit_code; } .EE .UNINDENT .UNINDENT .SS mongoc_transaction_state_t .sp Constants for transaction states .SS Synopsis .INDENT 
0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_TRANSACTION_NONE = 0, MONGOC_TRANSACTION_STARTING = 1, MONGOC_TRANSACTION_IN_PROGRESS = 2, MONGOC_TRANSACTION_COMMITTED = 3, MONGOC_TRANSACTION_ABORTED = 4, } mongoc_transaction_state_t; .EE .UNINDENT .UNINDENT .SS Description .sp These constants describe the current transaction state of a session. .SS Flag Values .TS center; |l|l|. _ T{ MONGOC_TRANSACTION_NONE T} T{ There is no transaction in progress. T} _ T{ MONGOC_TRANSACTION_STARTING T} T{ A transaction has been started, but no operation has been sent to the server. T} _ T{ MONGOC_TRANSACTION_IN_PROGRESS T} T{ A transaction is in progress. T} _ T{ MONGOC_TRANSACTION_COMMITTED T} T{ The transaction was committed. T} _ T{ MONGOC_TRANSACTION_ABORTED T} T{ The transaction was aborted. T} _ .TE .SS mongoc_update_flags_t .sp Flags for update operations .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef enum { MONGOC_UPDATE_NONE = 0, MONGOC_UPDATE_UPSERT = 1 << 0, MONGOC_UPDATE_MULTI_UPDATE = 1 << 1, } mongoc_update_flags_t; #define MONGOC_UPDATE_NO_VALIDATE (1U << 31) .EE .UNINDENT .UNINDENT .SS Description .sp These flags correspond to the MongoDB wire protocol. They may be bitwise or\(aqd together. They allow for modifying the way an update is performed in the MongoDB server. .SS Flag Values .TS center; |l|l|. _ T{ MONGOC_UPDATE_NONE T} T{ No update flags set. T} _ T{ MONGOC_UPDATE_UPSERT T} T{ If an upsert should be performed. T} _ T{ MONGOC_UPDATE_MULTI_UPDATE T} T{ If more than a single matching document should be updated. By default only the first document is updated. T} _ T{ MONGOC_UPDATE_NO_VALIDATE T} T{ Do not perform client\-side BSON validation when performing an update. This is useful if you already know your BSON documents are valid.
T} _ .TE .SS mongoc_uri_t .SS Synopsis .INDENT 0.0 .INDENT 3.5 .sp .EX typedef struct _mongoc_uri_t mongoc_uri_t; .EE .UNINDENT .UNINDENT .SS Description .sp \fBmongoc_uri_t\fP provides an abstraction on top of the MongoDB connection URI format. It provides standardized parsing as well as convenience methods for extracting useful information such as replica hosts or authorization information. .sp See \fI\%Connection String URI Reference\fP on the MongoDB website for more information. .SS Format .INDENT 0.0 .INDENT 3.5 .sp .EX mongodb[+srv]:// <1> [username:password@] <2> host1 <3> [:port1] <4> [,host2[:port2],...[,hostN[:portN]]] <5> [/[database] <6> [?options]] <7> .EE .UNINDENT .UNINDENT .INDENT 0.0 .IP 1. 3 \(dqmongodb\(dq is the specifier of the MongoDB protocol. Use \(dqmongodb+srv\(dq with a single service name in place of \(dqhost1\(dq to specify the initial list of servers with an SRV record. .IP 2. 3 An optional username and password. .IP 3. 3 The only required part of the uri. This specifies either a hostname, IPv4 address, IPv6 address enclosed in \(dq[\(dq and \(dq]\(dq, or UNIX domain socket. .IP 4. 3 An optional port number. Defaults to :27017. .IP 5. 3 Extra optional hosts and ports. You would specify multiple hosts, for example, for connections to replica sets. .IP 6. 3 The name of the database to authenticate if the connection string includes authentication credentials. If /database is not specified and the connection string includes credentials, defaults to the \(aqadmin\(aq database. .IP 7. 3 Connection specific options. .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Option names are case\-insensitive. Do not repeat the same option (e.g. \(dqmongodb://localhost/db?opt=value1&OPT=value2\(dq) since this may have unexpected results. .UNINDENT .UNINDENT .sp The MongoDB C Driver exposes constants for each supported connection option. These constants make it easier to discover connection options, but their string values can be used as well. 
.sp For example, the following calls are equal. .INDENT 0.0 .INDENT 3.5 .sp .EX uri = mongoc_uri_new (\(dqmongodb://localhost/?\(dq MONGOC_URI_APPNAME \(dq=applicationName\(dq); uri = mongoc_uri_new (\(dqmongodb://localhost/?appname=applicationName\(dq); uri = mongoc_uri_new (\(dqmongodb://localhost/?appName=applicationName\(dq); .EE .UNINDENT .UNINDENT .SS Replica Set Example .sp To describe a connection to a replica set named \(aqtest\(aq with the following mongod hosts: .INDENT 0.0 .IP \(bu 2 \fBdb1.example.com\fP on port \fB27017\fP .IP \(bu 2 \fBdb2.example.com\fP on port \fB2500\fP .UNINDENT .sp You would use a connection string that resembles the following. .INDENT 0.0 .INDENT 3.5 .sp .EX mongodb://db1.example.com,db2.example.com:2500/?replicaSet=test .EE .UNINDENT .UNINDENT .SS SRV Example .sp If you have configured an \fI\%SRV record\fP with a name like \(dq_mongodb._tcp.server.example.com\(dq whose records are a list of one or more MongoDB server hostnames, use a connection string like this: .INDENT 0.0 .INDENT 3.5 .sp .EX uri = mongoc_uri_new (\(dqmongodb+srv://server.example.com/?replicaSet=rs&appName=applicationName\(dq); .EE .UNINDENT .UNINDENT .sp The driver prefixes the service name with \(dq_mongodb._tcp.\(dq, then performs a DNS SRV query to resolve the service name to one or more hostnames. If this query succeeds, the driver performs a DNS TXT query on the service name (without the \(dq_mongodb._tcp\(dq prefix) for additional URI options configured as TXT records. .sp On Unix, the MongoDB C Driver relies on libresolv to look up SRV and TXT records. If libresolv is unavailable, then using a \(dqmongodb+srv\(dq URI will cause an error. If your libresolv lacks \fBres_nsearch\fP then the driver will fall back to \fBres_search\fP, which is not thread\-safe. .SS IPv4 and IPv6 .sp If connecting to a hostname that has both IPv4 and IPv6 DNS records, the behavior follows \fI\%RFC\-6555\fP\&. A connection to the IPv6 address is attempted first. 
If IPv6 fails, then a connection is attempted to the IPv4 address. If the connection attempt to IPv6 does not complete within 250ms, then IPv4 is tried in parallel. Whichever connection succeeds first cancels the other. The successful DNS result is cached for 10 minutes. .sp As a consequence, attempts to connect to a mongod only listening on IPv4 may be delayed if there are both A (IPv4) and AAAA (IPv6) DNS records associated with the host. .sp To avoid a delay, configure hostnames to match the MongoDB configuration. That is, only create an A record if the mongod is only listening on IPv4. .SS Connection Options .TS center; |l|l|l|l|. _ T{ Constant T} T{ Key T} T{ Default T} T{ Description T} _ T{ MONGOC_URI_RETRYREADS T} T{ retryreads T} T{ true T} T{ If \(dqtrue\(dq and the server is a MongoDB 3.6+ standalone, replica set, or sharded cluster, the driver safely retries a read that failed due to a network error or replica set failover. T} _ T{ MONGOC_URI_RETRYWRITES T} T{ retrywrites T} T{ true if driver built w/ TLS T} T{ If \(dqtrue\(dq and the server is a MongoDB 3.6+ replica set or sharded cluster, the driver safely retries a write that failed due to a network error or replica set failover. Only inserts, updates of single documents, or deletes of single documents are retried. T} _ T{ MONGOC_URI_APPNAME T} T{ appname T} T{ Empty (no appname) T} T{ The client application name. This value is used by MongoDB when it logs connection information and profile information, such as slow queries. T} _ T{ MONGOC_URI_TLS T} T{ tls T} T{ Empty (not set, same as false) T} T{ {true|false}, indicating if TLS must be used. (See also \fI\%mongoc_client_set_ssl_opts()\fP and \fI\%mongoc_client_pool_set_ssl_opts()\fP\&.) T} _ T{ MONGOC_URI_COMPRESSORS T} T{ compressors T} T{ Empty (no compressors) T} T{ Comma\-separated list of compressors, if any, to use to compress the wire protocol messages.
Snappy, zlib, and zstd are optional build\-time dependencies, and enable the \(dqsnappy\(dq, \(dqzlib\(dq, and \(dqzstd\(dq values respectively. T} _ T{ MONGOC_URI_CONNECTTIMEOUTMS T} T{ connecttimeoutms T} T{ 10,000 ms (10 seconds) T} T{ This setting applies to new server connections. It is also used as the socket timeout for server discovery and monitoring operations. T} _ T{ MONGOC_URI_SOCKETTIMEOUTMS T} T{ sockettimeoutms T} T{ 300,000 ms (5 minutes) T} T{ The time in milliseconds to attempt to send or receive on a socket before the attempt times out. T} _ T{ MONGOC_URI_REPLICASET T} T{ replicaset T} T{ Empty (no replicaset) T} T{ The name of the Replica Set that the driver should connect to. T} _ T{ MONGOC_URI_ZLIBCOMPRESSIONLEVEL T} T{ zlibcompressionlevel T} T{ \-1 T} T{ When MONGOC_URI_COMPRESSORS includes \(dqzlib\(dq, this option configures the zlib compression level used when compressing client data. T} _ T{ MONGOC_URI_LOADBALANCED T} T{ loadbalanced T} T{ false T} T{ If true, this indicates the driver is connecting to a MongoDB cluster behind a load balancer. T} _ T{ MONGOC_URI_SRVMAXHOSTS T} T{ srvmaxhosts T} T{ 0 T} T{ If zero, the number of hosts in DNS results is unlimited. If greater than zero, the number of hosts in DNS results is limited to at most the given value. T} _ .TE .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 Setting any of the *timeoutMS options above to either \fB0\fP or a negative value is discouraged due to unspecified and inconsistent behavior. The \(dqdefault value\(dq historically specified as a fallback for \fB0\fP or a negative value is NOT related to the default values for the *timeoutMS options documented above. The meaning of a timeout of \fB0\fP or a negative value may vary depending on the operation being executed, even when specified by the same URI option.
To specify the documented default value for a *timeoutMS option, use the \fIMONGOC_DEFAULT_*\fP constants defined in \fBmongoc\-client.h\fP instead. .UNINDENT .UNINDENT .SS Authentication Options .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_AUTHMECHANISM T} T{ authmechanism T} T{ Specifies the mechanism to use when authenticating as the provided user. See \fI\%Authentication\fP for supported values. T} _ T{ MONGOC_URI_AUTHMECHANISMPROPERTIES T} T{ authmechanismproperties T} T{ Certain authentication mechanisms have additional options that can be configured. These options should be provided as comma\-separated option_key:option_value pairs in authMechanismProperties. Specifying the same option_key multiple times has undefined behavior. T} _ T{ MONGOC_URI_AUTHSOURCE T} T{ authsource T} T{ The authSource defines the database to authenticate against. It is unnecessary to provide this option if the database name is the same as the database used in the URI. T} _ .TE .SS Mechanism Properties .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_CANONICALIZEHOSTNAME T} T{ canonicalizehostname T} T{ Use the canonical hostname of the service, rather than its configured alias, when authenticating with Cyrus\-SASL Kerberos. T} _ T{ MONGOC_URI_GSSAPISERVICENAME T} T{ gssapiservicename T} T{ Use an alternative service name. The default is \fBmongodb\fP\&. T} _ .TE .SS TLS Options .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_TLS T} T{ tls T} T{ {true|false}, indicating if TLS must be used. T} _ T{ MONGOC_URI_TLSCERTIFICATEKEYFILE T} T{ tlscertificatekeyfile T} T{ Path to a PEM\-formatted Private Key, with its Public Certificate concatenated at the end. T} _ T{ MONGOC_URI_TLSCERTIFICATEKEYFILEPASSWORD T} T{ tlscertificatekeypassword T} T{ The password, if any, to use to unlock the encrypted Private Key.
T} _ T{ MONGOC_URI_TLSCAFILE T} T{ tlscafile T} T{ One Certificate Authority, or a bundle of Certificate Authorities, that should be considered trusted. T} _ T{ MONGOC_URI_TLSALLOWINVALIDCERTIFICATES T} T{ tlsallowinvalidcertificates T} T{ Accept and ignore certificate verification errors (e.g. untrusted issuer, expired, etc.) T} _ T{ MONGOC_URI_TLSALLOWINVALIDHOSTNAMES T} T{ tlsallowinvalidhostnames T} T{ Ignore hostname verification of the certificate (e.g. a man\-in\-the\-middle presenting a valid certificate issued for another hostname) T} _ T{ MONGOC_URI_TLSINSECURE T} T{ tlsinsecure T} T{ {true|false}, indicating if insecure TLS options should be used. Currently this implies MONGOC_URI_TLSALLOWINVALIDCERTIFICATES and MONGOC_URI_TLSALLOWINVALIDHOSTNAMES. T} _ T{ MONGOC_URI_TLSDISABLECERTIFICATEREVOCATIONCHECK T} T{ tlsdisablecertificaterevocationcheck T} T{ {true|false}, indicates if revocation checking (CRL / OCSP) should be disabled. T} _ T{ MONGOC_URI_TLSDISABLEOCSPENDPOINTCHECK T} T{ tlsdisableocspendpointcheck T} T{ {true|false}, indicates if OCSP responder endpoints should not be requested when an OCSP response is not stapled. T} _ .TE .sp See \fI\%Configuring TLS\fP for details about these options and about building libmongoc with TLS support. .SS Deprecated SSL Options .sp The following options have been deprecated and may be removed from future releases of libmongoc. .TS center; |l|l|l|l|.
_ T{ Constant T} T{ Key T} T{ Deprecated For T} T{ Key T} _ T{ MONGOC_URI_SSL T} T{ ssl T} T{ MONGOC_URI_TLS T} T{ tls T} _ T{ MONGOC_URI_SSLCLIENTCERTIFICATEKEYFILE T} T{ sslclientcertificatekeyfile T} T{ MONGOC_URI_TLSCERTIFICATEKEYFILE T} T{ tlscertificatekeyfile T} _ T{ MONGOC_URI_SSLCLIENTCERTIFICATEKEYPASSWORD T} T{ sslclientcertificatekeypassword T} T{ MONGOC_URI_TLSCERTIFICATEKEYFILEPASSWORD T} T{ tlscertificatekeypassword T} _ T{ MONGOC_URI_SSLCERTIFICATEAUTHORITYFILE T} T{ sslcertificateauthorityfile T} T{ MONGOC_URI_TLSCAFILE T} T{ tlscafile T} _ T{ MONGOC_URI_SSLALLOWINVALIDCERTIFICATES T} T{ sslallowinvalidcertificates T} T{ MONGOC_URI_TLSALLOWINVALIDCERTIFICATES T} T{ tlsallowinvalidcertificates T} _ T{ MONGOC_URI_SSLALLOWINVALIDHOSTNAMES T} T{ sslallowinvalidhostnames T} T{ MONGOC_URI_TLSALLOWINVALIDHOSTNAMES T} T{ tlsallowinvalidhostnames T} _ .TE .SS Server Discovery, Monitoring, and Selection Options .sp Clients in a \fI\%mongoc_client_pool_t\fP share a topology scanner that runs on a background thread. The thread wakes every \fBheartbeatFrequencyMS\fP (default 10 seconds) to scan all MongoDB servers in parallel. Whenever an application operation requires a server that is not known\-\-for example, if there is no known primary and your application attempts an insert\-\-the thread rescans all servers every half\-second. In this situation the pooled client waits up to \fBserverSelectionTimeoutMS\fP (default 30 seconds) for the thread to find a server suitable for the operation, then returns an error with domain \fBMONGOC_ERROR_SERVER_SELECTION\fP\&. .sp Technically, the total time an operation may wait while a pooled client scans the topology is controlled both by \fBserverSelectionTimeoutMS\fP and \fBconnectTimeoutMS\fP\&. The longest wait occurs if the last scan begins just at the end of the selection timeout, and a slow or down server requires the full connection timeout before the client gives up. .sp A non\-pooled client is single\-threaded. 
Every \fBheartbeatFrequencyMS\fP, it blocks the next application operation while it does a parallel scan. This scan takes as long as needed to check the slowest server: roughly \fBconnectTimeoutMS\fP\&. Therefore the default \fBheartbeatFrequencyMS\fP for single\-threaded clients is greater than for pooled clients: 60 seconds. .sp By default, single\-threaded (non\-pooled) clients scan only once when an operation requires a server that is not known. If you attempt an insert and there is no known primary, the client checks all servers once trying to find it, then succeeds or returns an error with domain \fBMONGOC_ERROR_SERVER_SELECTION\fP\&. But if you set \fBserverSelectionTryOnce\fP to \(dqfalse\(dq, the single\-threaded client loops, checking all servers every half\-second, until \fBserverSelectionTimeoutMS\fP expires. .sp The total time an operation may wait for a single\-threaded client to scan the topology is determined by \fBconnectTimeoutMS\fP in the try\-once case, or \fBserverSelectionTimeoutMS\fP and \fBconnectTimeoutMS\fP if \fBserverSelectionTryOnce\fP is set to \(dqfalse\(dq. .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_HEARTBEATFREQUENCYMS T} T{ heartbeatfrequencyms T} T{ The interval between server monitoring checks. Defaults to 10,000ms (10 seconds) in pooled (multi\-threaded) mode, 60,000ms (60 seconds) in non\-pooled mode (single\-threaded). T} _ T{ MONGOC_URI_SERVERSELECTIONTIMEOUTMS T} T{ serverselectiontimeoutms T} T{ A timeout in milliseconds to block for server selection before returning an error. The default is 30,000ms (30 seconds). T} _ T{ MONGOC_URI_SERVERSELECTIONTRYONCE T} T{ serverselectiontryonce T} T{ If \(dqtrue\(dq, the driver scans the topology exactly once after server selection fails, then either selects a server or returns an error.
If it is false, then the driver repeatedly searches for a suitable server for up to \fBserverSelectionTimeoutMS\fP milliseconds (pausing a half second between attempts). The default for \fBserverSelectionTryOnce\fP is \(dqfalse\(dq for pooled clients, otherwise \(dqtrue\(dq. Pooled clients ignore serverSelectionTryOnce; they signal the thread to rescan the topology every half\-second until serverSelectionTimeoutMS expires. T} _ T{ MONGOC_URI_SOCKETCHECKINTERVALMS T} T{ socketcheckintervalms T} T{ Only applies to single threaded clients. If a socket has not been used within this time, its connection is checked with a quick \(dqhello\(dq call before it is used again. Defaults to 5,000ms (5 seconds). T} _ T{ MONGOC_URI_DIRECTCONNECTION T} T{ directconnection T} T{ If \(dqtrue\(dq, the driver connects to a single server directly and will not monitor additional servers. If \(dqfalse\(dq, the driver connects based on the presence and value of the \fBreplicaSet\fP option. T} _ .TE .sp Setting any of the *TimeoutMS options above to \fB0\fP will be interpreted as \(dquse the default value\(dq. .SS Connection Pool Options .sp These options govern the behavior of a \fI\%mongoc_client_pool_t\fP\&. They are ignored by a non\-pooled \fI\%mongoc_client_t\fP\&. .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_MAXPOOLSIZE T} T{ maxpoolsize T} T{ The maximum number of clients created by a \fI\%mongoc_client_pool_t\fP total (both in the pool and checked out). The default value is 100. Once it is reached, \fI\%mongoc_client_pool_pop()\fP blocks until another thread pushes a client. T} _ T{ MONGOC_URI_MINPOOLSIZE T} T{ minpoolsize T} T{ Deprecated. This option\(aqs behavior does not match its name, and its actual behavior will likely hurt performance. T} _ T{ MONGOC_URI_MAXIDLETIMEMS T} T{ maxidletimems T} T{ Not implemented. T} _ T{ MONGOC_URI_WAITQUEUEMULTIPLE T} T{ waitqueuemultiple T} T{ Not implemented. 
T} _ T{ MONGOC_URI_WAITQUEUETIMEOUTMS T} T{ waitqueuetimeoutms T} T{ The maximum time to wait for a client to become available from the pool. T} _ .TE .SS Write Concern Options .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_W T} T{ w T} T{ Determines the write concern (guarantee). Valid values: .INDENT 0.0 .IP \(bu 2 0 = The driver will not acknowledge write operations, but will pass along to the client any network and socket errors that it receives. If you disable write concern but enable the getLastError command\(aqs w option, the w option takes precedence. .IP \(bu 2 1 = Provides basic acknowledgement of write operations. By specifying 1, you require that a standalone mongod instance, or the primary for replica sets, acknowledge all write operations. For drivers released after the default write concern change, this is the default write concern setting. .IP \(bu 2 majority = For replica sets, if you specify the special majority value to the w option, write operations will only return successfully after a majority of the configured replica set members have acknowledged the write operation. .IP \(bu 2 n = For replica sets, if you specify a number n greater than 1, operations with this write concern return only after n members of the set have acknowledged the write. If you set n to a number that is greater than the number of available set members or members that hold data, MongoDB will wait, potentially indefinitely, for these members to become available. .IP \(bu 2 tags = For replica sets, you can specify a tag set to require that all members of the set that have these tags configured return confirmation of the write operation. .UNINDENT T} _ T{ MONGOC_URI_WTIMEOUTMS T} T{ wtimeoutms T} T{ The time in milliseconds to wait for replication to succeed, as specified in the w option, before timing out. When wtimeoutMS is 0, write operations will never time out.
T} _ T{ MONGOC_URI_JOURNAL T} T{ journal T} T{ Controls whether write operations will wait until the mongod acknowledges the write operations and commits the data to the on\-disk journal. .INDENT 0.0 .IP \(bu 2 true = Enables journal commit acknowledgement write concern. Equivalent to specifying the getLastError command with the j option enabled. .IP \(bu 2 false = Does not require that mongod commit write operations to the journal before acknowledging the write operation. This is the default option for the journal parameter. .UNINDENT T} _ .TE .SS Read Concern Options .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_READCONCERNLEVEL T} T{ readconcernlevel T} T{ The level of isolation for read operations. If the level is left unspecified, the server default will be used. See \fI\%readConcern in the MongoDB Manual\fP for details. T} _ .TE .SS Read Preference Options .sp When connected to a replica set, the driver chooses which member to query using the read preference: .INDENT 0.0 .IP 1. 3 Choose members whose type matches \(dqreadPreference\(dq. .IP 2. 3 From these, if there are any tag sets configured, choose members matching the first tag set. If there are none, fall back to the next tag set and so on, until some members are chosen or the tag sets are exhausted. .IP 3. 3 From the chosen servers, distribute queries randomly among the servers with the fastest round\-trip times. These include the server with the fastest time and any whose round\-trip time is no more than \(dqlocalThresholdMS\(dq slower. .UNINDENT .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_READPREFERENCE T} T{ readpreference T} T{ Specifies the replica set read preference for this connection. This setting overrides any secondaryOk value.
The read preference values are the following: .INDENT 0.0 .IP \(bu 2 primary (default) .IP \(bu 2 primaryPreferred .IP \(bu 2 secondary .IP \(bu 2 secondaryPreferred .IP \(bu 2 nearest .UNINDENT T} _ T{ MONGOC_URI_READPREFERENCETAGS T} T{ readpreferencetags T} T{ A representation of a tag set. See also \fI\%Tag Sets\fP\&. T} _ T{ MONGOC_URI_LOCALTHRESHOLDMS T} T{ localthresholdms T} T{ How far to distribute queries, beyond the server with the fastest round\-trip time. By default, only servers within 15ms of the fastest round\-trip time receive queries. T} _ T{ MONGOC_URI_MAXSTALENESSSECONDS T} T{ maxstalenessseconds T} T{ The maximum replication lag, in wall\-clock time, that a secondary can suffer and still be eligible. The smallest allowed value for maxStalenessSeconds is 90 seconds. T} _ .TE .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 When connecting to more than one mongos, libmongoc\(aqs localThresholdMS applies only to the selection of mongos servers. The threshold for selecting among replica set members in shards is controlled by the \fI\%mongos\(aqs localThreshold command line option\fP\&. .UNINDENT .UNINDENT .SS Legacy Options .sp For historical reasons, the following options are available. However, they should not be used. .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_SAFE T} T{ safe T} T{ {true|false} Same as w={1|0} T} _ .TE .SS Version Checks .sp Conditional compilation based on mongoc version .SS Description .sp The following preprocessor macros can be used to perform various checks based on the version of the library you are compiling against. This may be useful if you only want to enable a feature on a certain version of the library.
.INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #define MONGOC_MAJOR_VERSION (x) #define MONGOC_MINOR_VERSION (y) #define MONGOC_MICRO_VERSION (z) #define MONGOC_VERSION_S \(dqx.y.z\(dq #define MONGOC_VERSION_HEX ((x << 24) | (y << 16) | (z << 8) | 0) #define MONGOC_CHECK_VERSION(major, minor, micro) .EE .UNINDENT .UNINDENT .sp Only compile a block on MongoDB C Driver 1.1.0 and newer. .INDENT 0.0 .INDENT 3.5 .sp .EX #if MONGOC_CHECK_VERSION(1, 1, 0) static void do_something (void) { } #endif .EE .UNINDENT .UNINDENT .SS mongoc_write_concern_t .sp Write Concern abstraction .SS Synopsis .sp \fBmongoc_write_concern_t\fP tells the driver what level of acknowledgement to await from the server. The default, MONGOC_WRITE_CONCERN_W_DEFAULT, is right for the great majority of applications. .sp You can specify a write concern on connection objects, database objects, collection objects, or per\-operation. Data\-modifying operations typically use the write concern of the object they operate on, and check the server response for a write concern error or write concern timeout. For example, \fI\%mongoc_collection_drop_index()\fP uses the collection\(aqs write concern, and a write concern error or timeout in the response is considered a failure. .sp Exceptions to this principle are the generic command functions: .INDENT 0.0 .IP \(bu 2 \fI\%mongoc_client_command()\fP .IP \(bu 2 \fI\%mongoc_client_command_simple()\fP .IP \(bu 2 \fI\%mongoc_database_command()\fP .IP \(bu 2 \fI\%mongoc_database_command_simple()\fP .IP \(bu 2 \fI\%mongoc_collection_command()\fP .IP \(bu 2 \fI\%mongoc_collection_command_simple()\fP .UNINDENT .sp These generic command functions do not automatically apply a write concern, and they do not check the server response for a write concern error or write concern timeout. .sp See \fI\%Write Concern\fP on the MongoDB website for more information. .SS Write Concern Levels .sp Set the write concern level with \fI\%mongoc_write_concern_set_w()\fP\&. .TS center; |l|l|.
_ T{ MONGOC_WRITE_CONCERN_W_DEFAULT (1) T} T{ By default, writes block awaiting acknowledgement from MongoDB. Acknowledged write concern allows clients to catch network, duplicate key, and other errors. T} _ T{ MONGOC_WRITE_CONCERN_W_UNACKNOWLEDGED (0) T} T{ With this write concern, MongoDB does not acknowledge the receipt of write operations. Unacknowledged is similar to errors ignored; however, mongoc attempts to receive and handle network errors when possible. T} _ T{ MONGOC_WRITE_CONCERN_W_MAJORITY (majority) T} T{ Block until a write has been propagated to a majority of the nodes in the replica set. T} _ T{ n T} T{ Block until a write has been propagated to at least \fBn\fP nodes in the replica set. T} _ .TE .SS Deprecations .sp The write concern \fBMONGOC_WRITE_CONCERN_W_ERRORS_IGNORED\fP (value \-1) is a deprecated synonym for \fBMONGOC_WRITE_CONCERN_W_UNACKNOWLEDGED\fP (value 0), and will be removed in the next major release. .sp \fI\%mongoc_write_concern_set_fsync()\fP is deprecated. .SS Application Performance Monitoring (APM) .sp The MongoDB C Driver allows you to monitor all the MongoDB operations the driver executes. This event\-notification system conforms to two MongoDB driver specs: .INDENT 0.0 .IP \(bu 2 \fI\%Command Logging and Monitoring\fP: events related to all application operations. .IP \(bu 2 \fI\%SDAM Monitoring\fP: events related to the driver\(aqs Server Discovery And Monitoring logic. .UNINDENT .sp To receive notifications, create a \fBmongoc_apm_callbacks_t\fP with \fI\%mongoc_apm_callbacks_new()\fP, set callbacks on it, then pass it to \fI\%mongoc_client_set_apm_callbacks()\fP or \fI\%mongoc_client_pool_set_apm_callbacks()\fP\&.
.SS Command\-Monitoring Example .sp example\-command\-monitoring.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* gcc example\-command\-monitoring.c \-o example\-command\-monitoring \e * $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) */ /* ./example\-command\-monitoring [CONNECTION_STRING] */ #include <mongoc/mongoc.h> #include <stdio.h> typedef struct { int started; int succeeded; int failed; } stats_t; void command_started (const mongoc_apm_command_started_t *event) { char *s; s = bson_as_relaxed_extended_json ( mongoc_apm_command_started_get_command (event), NULL); printf (\(dqCommand %s started on %s:\en%s\en\en\(dq, mongoc_apm_command_started_get_command_name (event), mongoc_apm_command_started_get_host (event)\->host, s); ((stats_t *) mongoc_apm_command_started_get_context (event))\->started++; bson_free (s); } void command_succeeded (const mongoc_apm_command_succeeded_t *event) { char *s; s = bson_as_relaxed_extended_json ( mongoc_apm_command_succeeded_get_reply (event), NULL); printf (\(dqCommand %s succeeded:\en%s\en\en\(dq, mongoc_apm_command_succeeded_get_command_name (event), s); ((stats_t *) mongoc_apm_command_succeeded_get_context (event))\->succeeded++; bson_free (s); } void command_failed (const mongoc_apm_command_failed_t *event) { bson_error_t error; mongoc_apm_command_failed_get_error (event, &error); printf (\(dqCommand %s failed:\en\e\(dq%s\e\(dq\en\en\(dq, mongoc_apm_command_failed_get_command_name (event), error.message); ((stats_t *) mongoc_apm_command_failed_get_context (event))\->failed++; } int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_apm_callbacks_t *callbacks; stats_t stats = {0}; mongoc_collection_t *collection; bson_error_t error; const char *uri_string = \(dqmongodb://127.0.0.1/?appname=cmd\-monitoring\-example\(dq; mongoc_uri_t *uri; const char *collection_name = \(dqtest\(dq; bson_t *docs[2]; mongoc_init (); if (argc > 1) { uri_string = argv[1]; } uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to
parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); callbacks = mongoc_apm_callbacks_new (); mongoc_apm_set_command_started_cb (callbacks, command_started); mongoc_apm_set_command_succeeded_cb (callbacks, command_succeeded); mongoc_apm_set_command_failed_cb (callbacks, command_failed); mongoc_client_set_apm_callbacks ( client, callbacks, (void *) &stats /* context pointer */); collection = mongoc_client_get_collection (client, \(dqtest\(dq, collection_name); mongoc_collection_drop (collection, NULL); docs[0] = BCON_NEW (\(dq_id\(dq, BCON_INT32 (0)); docs[1] = BCON_NEW (\(dq_id\(dq, BCON_INT32 (1)); mongoc_collection_insert_many ( collection, (const bson_t **) docs, 2, NULL, NULL, NULL); /* duplicate key error on the second insert */ mongoc_collection_insert_one (collection, docs[0], NULL, NULL, NULL); mongoc_collection_destroy (collection); mongoc_apm_callbacks_destroy (callbacks); mongoc_uri_destroy (uri); mongoc_client_destroy (client); printf (\(dqstarted: %d\ensucceeded: %d\enfailed: %d\en\(dq, stats.started, stats.succeeded, stats.failed); bson_destroy (docs[0]); bson_destroy (docs[1]); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp This example program prints: .INDENT 0.0 .INDENT 3.5 .sp .EX Command drop started on 127.0.0.1: { \(dqdrop\(dq : \(dqtest\(dq } Command drop succeeded: { \(dqns\(dq : \(dqtest.test\(dq, \(dqnIndexesWas\(dq : 1, \(dqok\(dq : 1.0 } Command insert started on 127.0.0.1: { \(dqinsert\(dq : \(dqtest\(dq, \(dqordered\(dq : true, \(dqdocuments\(dq : [ { \(dq_id\(dq : 0 }, { \(dq_id\(dq : 1 } ] } Command insert succeeded: { \(dqn\(dq : 2, \(dqok\(dq : 1.0 } Command insert started on 127.0.0.1: { \(dqinsert\(dq : \(dqtest\(dq, \(dqordered\(dq : true, \(dqdocuments\(dq : [ { \(dq_id\(dq : 0 } ] } Command insert succeeded: { \(dqn\(dq : 0, 
\(dqwriteErrors\(dq : [ { \(dqindex\(dq : 0, \(dqcode\(dq : 11000, \(dqerrmsg\(dq : \(dqduplicate key\(dq } ], \(dqok\(dq : 1.0 } started: 3 succeeded: 3 failed: 0 .EE .UNINDENT .UNINDENT .sp The output has been edited and formatted for clarity. Depending on your server configuration, messages may include metadata like database name, logical session IDs, or cluster times that are not shown here. .sp The final \(dqinsert\(dq command is considered successful, despite the writeError, because the server replied to the overall command with \fB\(dqok\(dq: 1\fP\&. .SS SDAM Monitoring Example .sp example\-sdam\-monitoring.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* gcc example\-sdam\-monitoring.c \-o example\-sdam\-monitoring \e * $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) */ /* ./example\-sdam\-monitoring [CONNECTION_STRING] */ #include <mongoc/mongoc.h> #include <stdio.h> typedef struct { int server_changed_events; int server_opening_events; int server_closed_events; int topology_changed_events; int topology_opening_events; int topology_closed_events; int heartbeat_started_events; int heartbeat_succeeded_events; int heartbeat_failed_events; } stats_t; static void server_changed (const mongoc_apm_server_changed_t *event) { stats_t *context; const mongoc_server_description_t *prev_sd, *new_sd; context = (stats_t *) mongoc_apm_server_changed_get_context (event); context\->server_changed_events++; prev_sd = mongoc_apm_server_changed_get_previous_description (event); new_sd = mongoc_apm_server_changed_get_new_description (event); printf (\(dqserver changed: %s %s \-> %s\en\(dq, mongoc_apm_server_changed_get_host (event)\->host_and_port, mongoc_server_description_type (prev_sd), mongoc_server_description_type (new_sd)); } static void server_opening (const mongoc_apm_server_opening_t *event) { stats_t *context; context = (stats_t *) mongoc_apm_server_opening_get_context (event); context\->server_opening_events++; printf (\(dqserver opening: %s\en\(dq, mongoc_apm_server_opening_get_host
(event)\->host_and_port);
}

static void
server_closed (const mongoc_apm_server_closed_t *event)
{
   stats_t *context;

   context = (stats_t *) mongoc_apm_server_closed_get_context (event);
   context\->server_closed_events++;
   printf (\(dqserver closed: %s\en\(dq,
           mongoc_apm_server_closed_get_host (event)\->host_and_port);
}

static void
topology_changed (const mongoc_apm_topology_changed_t *event)
{
   stats_t *context;
   const mongoc_topology_description_t *prev_td;
   const mongoc_topology_description_t *new_td;
   mongoc_server_description_t **prev_sds;
   size_t n_prev_sds;
   mongoc_server_description_t **new_sds;
   size_t n_new_sds;
   size_t i;
   mongoc_read_prefs_t *prefs;

   context = (stats_t *) mongoc_apm_topology_changed_get_context (event);
   context\->topology_changed_events++;
   prev_td = mongoc_apm_topology_changed_get_previous_description (event);
   prev_sds = mongoc_topology_description_get_servers (prev_td, &n_prev_sds);
   new_td = mongoc_apm_topology_changed_get_new_description (event);
   new_sds = mongoc_topology_description_get_servers (new_td, &n_new_sds);

   printf (\(dqtopology changed: %s \-> %s\en\(dq,
           mongoc_topology_description_type (prev_td),
           mongoc_topology_description_type (new_td));

   if (n_prev_sds) {
      printf (\(dq  previous servers:\en\(dq);
      for (i = 0; i < n_prev_sds; i++) {
         printf (\(dq      %s %s\en\(dq,
                 mongoc_server_description_type (prev_sds[i]),
                 mongoc_server_description_host (prev_sds[i])\->host_and_port);
      }
   }

   if (n_new_sds) {
      printf (\(dq  new servers:\en\(dq);
      for (i = 0; i < n_new_sds; i++) {
         printf (\(dq      %s %s\en\(dq,
                 mongoc_server_description_type (new_sds[i]),
                 mongoc_server_description_host (new_sds[i])\->host_and_port);
      }
   }

   prefs = mongoc_read_prefs_new (MONGOC_READ_SECONDARY);

   /* it is safe, and unfortunately necessary, to cast away const here */
   if (mongoc_topology_description_has_readable_server (
          (mongoc_topology_description_t *) new_td, prefs)) {
      printf (\(dq  secondary AVAILABLE\en\(dq);
   } else {
      printf (\(dq  secondary UNAVAILABLE\en\(dq);
   }

   if (mongoc_topology_description_has_writable_server (
          (mongoc_topology_description_t *) new_td)) {
      printf (\(dq  primary AVAILABLE\en\(dq);
   } else {
      printf (\(dq  primary UNAVAILABLE\en\(dq);
   }

   mongoc_read_prefs_destroy (prefs);
   mongoc_server_descriptions_destroy_all (prev_sds, n_prev_sds);
   mongoc_server_descriptions_destroy_all (new_sds, n_new_sds);
}

static void
topology_opening (const mongoc_apm_topology_opening_t *event)
{
   stats_t *context;

   context = (stats_t *) mongoc_apm_topology_opening_get_context (event);
   context\->topology_opening_events++;
   printf (\(dqtopology opening\en\(dq);
}

static void
topology_closed (const mongoc_apm_topology_closed_t *event)
{
   stats_t *context;

   context = (stats_t *) mongoc_apm_topology_closed_get_context (event);
   context\->topology_closed_events++;
   printf (\(dqtopology closed\en\(dq);
}

static void
server_heartbeat_started (const mongoc_apm_server_heartbeat_started_t *event)
{
   stats_t *context;

   context = (stats_t *) mongoc_apm_server_heartbeat_started_get_context (event);
   context\->heartbeat_started_events++;
   printf (\(dq%s heartbeat started\en\(dq,
           mongoc_apm_server_heartbeat_started_get_host (event)\->host_and_port);
}

static void
server_heartbeat_succeeded (
   const mongoc_apm_server_heartbeat_succeeded_t *event)
{
   stats_t *context;
   char *reply;

   context = (stats_t *) mongoc_apm_server_heartbeat_succeeded_get_context (event);
   context\->heartbeat_succeeded_events++;
   reply = bson_as_canonical_extended_json (
      mongoc_apm_server_heartbeat_succeeded_get_reply (event), NULL);
   printf (
      \(dq%s heartbeat succeeded: %s\en\(dq,
      mongoc_apm_server_heartbeat_succeeded_get_host (event)\->host_and_port,
      reply);
   bson_free (reply);
}

static void
server_heartbeat_failed (const mongoc_apm_server_heartbeat_failed_t *event)
{
   stats_t *context;
   bson_error_t error;

   context = (stats_t *) mongoc_apm_server_heartbeat_failed_get_context (event);
   context\->heartbeat_failed_events++;
   mongoc_apm_server_heartbeat_failed_get_error (event, &error);
   printf (\(dq%s heartbeat failed: %s\en\(dq,
           mongoc_apm_server_heartbeat_failed_get_host (event)\->host_and_port,
           error.message);
}

int
main (int argc, char *argv[])
{
   mongoc_client_t *client;
   mongoc_apm_callbacks_t *cbs;
   stats_t stats = {0};
   const char *uri_string =
      \(dqmongodb://127.0.0.1/?appname=sdam\-monitoring\-example\(dq;
   mongoc_uri_t *uri;
   bson_t cmd = BSON_INITIALIZER;
   bson_t reply;
   bson_error_t error;

   mongoc_init ();

   if (argc > 1) {
      uri_string = argv[1];
   }

   uri = mongoc_uri_new_with_error (uri_string, &error);
   if (!uri) {
      fprintf (stderr,
               \(dqfailed to parse URI: %s\en\(dq
               \(dqerror message: %s\en\(dq,
               uri_string,
               error.message);
      return EXIT_FAILURE;
   }

   client = mongoc_client_new_from_uri (uri);
   if (!client) {
      return EXIT_FAILURE;
   }

   mongoc_client_set_error_api (client, 2);
   cbs = mongoc_apm_callbacks_new ();
   mongoc_apm_set_server_changed_cb (cbs, server_changed);
   mongoc_apm_set_server_opening_cb (cbs, server_opening);
   mongoc_apm_set_server_closed_cb (cbs, server_closed);
   mongoc_apm_set_topology_changed_cb (cbs, topology_changed);
   mongoc_apm_set_topology_opening_cb (cbs, topology_opening);
   mongoc_apm_set_topology_closed_cb (cbs, topology_closed);
   mongoc_apm_set_server_heartbeat_started_cb (cbs, server_heartbeat_started);
   mongoc_apm_set_server_heartbeat_succeeded_cb (cbs,
                                                 server_heartbeat_succeeded);
   mongoc_apm_set_server_heartbeat_failed_cb (cbs, server_heartbeat_failed);
   mongoc_client_set_apm_callbacks (
      client, cbs, (void *) &stats /* context pointer */);

   /* the driver connects on demand to perform first operation */
   BSON_APPEND_INT32 (&cmd, \(dqbuildinfo\(dq, 1);
   mongoc_client_command_simple (client, \(dqadmin\(dq, &cmd, NULL, &reply, &error);
   mongoc_uri_destroy (uri);
   mongoc_client_destroy (client);

   printf (\(dqEvents:\en\(dq
           \(dq   server changed: %d\en\(dq
           \(dq   server opening: %d\en\(dq
           \(dq   server closed: %d\en\(dq
           \(dq   topology changed: %d\en\(dq
           \(dq   topology opening: %d\en\(dq
           \(dq   topology closed: %d\en\(dq
           \(dq   heartbeat started: %d\en\(dq
           \(dq   heartbeat succeeded: %d\en\(dq
           \(dq   heartbeat failed: %d\en\(dq,
           stats.server_changed_events,
           stats.server_opening_events,
           stats.server_closed_events,
           stats.topology_changed_events,
           stats.topology_opening_events,
           stats.topology_closed_events,
           stats.heartbeat_started_events,
           stats.heartbeat_succeeded_events,
           stats.heartbeat_failed_events);

   bson_destroy (&cmd);
   bson_destroy (&reply);
   mongoc_apm_callbacks_destroy (cbs);
   mongoc_cleanup ();

   return EXIT_SUCCESS;
}
.EE
.UNINDENT
.UNINDENT
.sp
Start a 3\-node replica set on localhost with set name \(dqrs\(dq and start the program:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
\&./example\-sdam\-monitoring \(dqmongodb://localhost:27017,localhost:27018/?replicaSet=rs\(dq
.EE
.UNINDENT
.UNINDENT
.sp
This example program prints something like:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
topology opening
topology changed: Unknown \-> ReplicaSetNoPrimary
  secondary UNAVAILABLE
  primary UNAVAILABLE
server opening: localhost:27017
server opening: localhost:27018
localhost:27017 heartbeat started
localhost:27018 heartbeat started
localhost:27017 heartbeat succeeded: { ... reply ... }
server changed: localhost:27017 Unknown \-> RSPrimary
server opening: localhost:27019
topology changed: ReplicaSetNoPrimary \-> ReplicaSetWithPrimary
  new servers:
      RSPrimary localhost:27017
  secondary UNAVAILABLE
  primary AVAILABLE
localhost:27019 heartbeat started
localhost:27018 heartbeat succeeded: { ... reply ... }
server changed: localhost:27018 Unknown \-> RSSecondary
topology changed: ReplicaSetWithPrimary \-> ReplicaSetWithPrimary
  previous servers:
      RSPrimary localhost:27017
  new servers:
      RSPrimary localhost:27017
      RSSecondary localhost:27018
  secondary AVAILABLE
  primary AVAILABLE
localhost:27019 heartbeat succeeded: { ... reply ... }
server changed: localhost:27019 Unknown \-> RSSecondary
topology changed: ReplicaSetWithPrimary \-> ReplicaSetWithPrimary
  previous servers:
      RSPrimary localhost:27017
      RSSecondary localhost:27018
  new servers:
      RSPrimary localhost:27017
      RSSecondary localhost:27018
      RSSecondary localhost:27019
  secondary AVAILABLE
  primary AVAILABLE
topology closed

Events:
   server changed: 3
   server opening: 3
   server closed: 0
   topology changed: 4
   topology opening: 1
   topology closed: 1
   heartbeat started: 3
   heartbeat succeeded: 3
   heartbeat failed: 0
.EE
.UNINDENT
.UNINDENT
.sp
The driver connects to the mongods on ports 27017 and 27018, which were specified in the URI, and determines which is primary. It also discovers the third member, \(dqlocalhost:27019\(dq, and adds it to the topology.
.INDENT 0.0
.IP \(bu 2
\fI\%Index\fP
.UNINDENT
.SH AUTHOR
MongoDB, Inc
.SH COPYRIGHT
2017-present, MongoDB, Inc
.\" Generated by docutils manpage writer.
.