VCH Deployment Options
The command-line utility for vSphere Integrated Containers Engine, `vic-machine`, provides a `create` command with options that allow you to customize the deployment of virtual container hosts (VCHs) to correspond to your vSphere environment.
- vSphere Target Options
- Security Options
- Private Registry Options
- Datastore Options
- Networking Options
- General Deployment Options
To allow you to fine-tune the deployment of VCHs, `vic-machine create` provides Advanced Options.
- Options for Specifying a Static IP Address for the VCH Endpoint VM
- Options for Configuring a Non-DHCP Network for Container Traffic
- Options to Configure VCHs to Use Proxy Servers
- Advanced Resource Management Options
- Other Advanced Options
vSphere Target Options
The `create` command of the `vic-machine` utility requires you to provide information about where in your vSphere environment to deploy the VCH and the vCenter Server or ESXi user account to use.
--target
Short name: -t
The IPv4 address, fully qualified domain name (FQDN), or URL of the ESXi host or vCenter Server instance on which you are deploying a VCH. This option is always mandatory.
To facilitate IP address changes in your infrastructure, provide an FQDN whenever possible, rather than an IP address. If `vic-machine create` cannot resolve the FQDN, it fails with an error.
- If the target ESXi host is not managed by vCenter Server, provide the address of the ESXi host.
--target esxi_host_address
- If the target ESXi host is managed by vCenter Server, or if you are deploying to a cluster, provide the address of vCenter Server.
--target vcenter_server_address
You can include the user name and password in the target URL. If you are deploying a VCH on vCenter Server, specify the username for an account that has the Administrator role on that vCenter Server instance.
--target vcenter_or_esxi_username:password@vcenter_or_esxi_address
Wrap the user name or password in single quotes (Linux or Mac OS) or double quotes (Windows) if they include special characters.
'vcenter_or_esxi_usern@me':'p@ssword'@vcenter_or_esxi_address
If you do not include the user name in the target URL, you must specify the `--user` option. If you do not specify the `--password` option or include the password in the target URL, `vic-machine create` prompts you to enter the password.

You can configure a VCH so that it uses a non-administrator account for post-deployment operations by specifying the `--ops-user` option.

If you are deploying a VCH on a vCenter Server instance that includes more than one datacenter, include the datacenter name in the target URL. If you include an invalid datacenter name, `vic-machine create` fails and suggests the available datacenters that you can specify.

--target vcenter_server_address/datacenter_name
Wrap the datacenter name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes spaces.
--target vcenter_server_address/'datacenter name'
--user
Short name: -u
The username for the ESXi host or vCenter Server instance on which you are deploying a VCH.
If you are deploying a VCH on vCenter Server, specify a username for an account that has the Administrator role on that vCenter Server instance.
--user esxi_or_vcenter_server_username
Wrap the user name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes special characters.
--user 'esxi_or_vcenter_server_usern@me'
You can specify the username in the URL that you pass to `vic-machine create` in the `--target` option, in which case the `--user` option is not required.

You can configure a VCH so that it uses a non-administrator account for post-deployment operations by specifying the `--ops-user` option.
--password
Short name: -p
The password for the user account on the vCenter Server on which you are deploying the VCH, or the password for the ESXi host if you are deploying directly to an ESXi host. If not specified, `vic-machine` prompts you to enter the password during deployment.
--password esxi_host_or_vcenter_server_password
Wrap the password in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes special characters.
--password 'esxi_host_or_vcenter_server_p@ssword'
You can also specify the username and password in the URL that you pass to `vic-machine create` in the `--target` option, in which case the `--password` option is not required.
--compute-resource
Short name: -r
The host, cluster, or resource pool in which to deploy the VCH.
If the vCenter Server instance on which you are deploying a VCH includes only a single standalone host or a single cluster, `vic-machine create` automatically detects and uses those resources. In this case, you do not need to specify a compute resource when you run `vic-machine create`. If you are deploying the VCH directly to an ESXi host and you do not use `--compute-resource` to specify a resource pool, `vic-machine create` automatically uses the default resource pool.
You specify the `--compute-resource` option in the following circumstances:
- A vCenter Server instance includes multiple instances of standalone hosts or clusters, or a mixture of standalone hosts and clusters.
- You want to deploy the VCH to a specific resource pool in your environment.
If you do not specify the `--compute-resource` option and multiple possible resources exist, or if you specify an invalid resource name, `vic-machine create` fails and suggests valid targets for `--compute-resource` in the failure message.
- To deploy to a specific resource pool on an ESXi host that is not managed by vCenter Server, specify the name of the resource pool:
--compute-resource resource_pool_name
- To deploy to a vCenter Server instance that has multiple standalone hosts that are not part of a cluster, specify the IPv4 address or fully qualified domain name (FQDN) of the target host:
--compute-resource host_address
- To deploy to a vCenter Server with multiple clusters, specify the name of the target cluster:
--compute-resource cluster_name
- To deploy to a specific resource pool on a standalone host that is managed by vCenter Server, or to a specific resource pool in a cluster, if the resource pool name is unique across all hosts and clusters, specify the name of the resource pool:
--compute-resource resource_pool_name
- To deploy to a specific resource pool on a standalone host that is managed by vCenter Server, if the resource pool name is not unique across all hosts, specify the IPv4 address or FQDN of the target host and name of the resource pool:
--compute-resource host_name/resource_pool_name
- To deploy to a specific resource pool in a cluster, if the resource pool name is not unique across all clusters, specify the full path to the resource pool:
--compute-resource cluster_name/Resources/resource_pool_name
- Wrap resource names in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if they include spaces:
--compute-resource 'resource pool name'
--compute-resource 'cluster name'/Resources/'resource pool name'
--thumbprint
Short name: None
The thumbprint of the vCenter Server or ESXi host certificate. Specify this option if your vSphere environment uses untrusted, self-signed certificates. If your vSphere environment uses trusted certificates that are signed by a known Certificate Authority (CA), you do not need to specify the `--thumbprint` option.
NOTE: If your vSphere environment uses untrusted, self-signed certificates, you can run `vic-machine create` without the `--thumbprint` option by using the `--force` option. However, running `vic-machine create` with the `--force` option rather than providing the certificate thumbprint is not recommended, because it permits man-in-the-middle attacks to go undetected.
To obtain the thumbprint of the vCenter Server or ESXi host certificate, run `vic-machine create` without specifying the `--thumbprint` or `--force` options. The deployment of the VCH fails, but the resulting error message includes the required certificate thumbprint. You can copy the thumbprint from the error message and run `vic-machine create` again, including the `--thumbprint` option.
NOTE: If you obtain the thumbprint by other means, use upper-case letters and colon delimitation in the thumbprint. Do not use space delimitation.
--thumbprint certificate_thumbprint
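For example, a minimal sketch of a deployment command that combines these target options might look like the following. The address, datacenter, cluster name, credentials, and thumbprint shown are placeholder values, and your environment might also require options that are described later in this topic, such as --bridge-network and --image-store.

vic-machine create \
--target 'administrator@vsphere.local':'p@ssw0rd'@vcsa.example.org/Datacenter1 \
--compute-resource Cluster1 \
--thumbprint A1:B2:C3:D4:E5:F6:07:18:29:3A:4B:5C:6D:7E:8F:90:A1:B2:C3:D4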
Security Options
The security options that `vic-machine create` provides allow for three broad categories of security:
- Restrict access to the Docker API with Auto-Generated Certificates
- Restrict access to the Docker API with Custom Certificates
- Do Not Restrict Access to the Docker API
You can also configure a VCH to use different user accounts for deployment and operation.
NOTE: Certain options in this section are exposed in the `vic-machine create` help if you run `vic-machine create --extended-help` or `vic-machine create -x`.
Restrict Access to the Docker API with Auto-Generated Certificates
As a convenience, `vic-machine create` provides the option of generating a client certificate, server certificate, and certificate authority (CA) as appropriate when you deploy a VCH. The generated certificates are functional, but they do not allow for fine control over aspects such as expiration, intermediate certificate authorities, and so on.
vSphere Integrated Containers Engine authenticates Docker API clients by using client certificates. This configuration is commonly referred to as `tlsverify` in documentation about containers and Docker. A client certificate is accepted if it is signed by a CA that you provide by specifying one or more instances of the `--tls-ca` option. In the case of the certificates that `vic-machine create` generates, `vic-machine create` creates a CA and uses it to create and sign a single client certificate.
When using the Docker client, the client validates the server either by using CAs that are present in the root certificate bundle of the client system, or that are provided explicitly by using the `--tlscacert` option when running Docker commands. As a part of this validation, the server certificate must explicitly state at least one of the following, and must match the name or address that the client uses to access the server:
- The FQDN used to communicate with the server
- The IP address used to communicate with the server
- A wildcard domain that matches all of the FQDNs in a specific subdomain. For an example of a domain wildcard, see https://en.wikipedia.org/wiki/Wildcard_certificate#Example.
--tls-cname
Short name: None
The FQDN or IP address to embed in an auto-generated server certificate. Specify an FQDN, IP address, or a domain wildcard. If you provide a custom server certificate by using the `--cert` option, you can use `--tls-cname` as a sanity check to ensure that the certificate is valid for the deployment.

If you do not specify `--tls-cname` but you do set a static address for the VCH on the client network interface, `vic-machine create` uses that address for the Common Name, with the same results as if you had specified `--tls-cname=x.x.x.x`. For information about setting a static IP address on the client network, see Options for Specifying a Static IP Address for the VCH Endpoint VM.
When you specify the `--tls-cname` option, `vic-machine create` performs the following actions during the deployment of the VCH:
- Checks for an existing certificate in either a folder that has the same name as the VCH that you are deploying, or in a location that you specify in the `--cert-path` option. If a valid certificate exists that includes the same Common Name attribute as the one that you specify in `--tls-cname`, `vic-machine create` reuses it. Reusing certificates allows you to delete and recreate VCHs for which you have already distributed the certificates to container developers.
- If certificates are present in the certificate folder that include a different Common Name attribute to the one that you specify in `--tls-cname`, `vic-machine create` fails.
- If a certificate folder does not exist, `vic-machine create` creates a folder with the same name as the VCH, or creates a folder in the location that you specify in the `--cert-path` option.
- If valid certificates do not already exist, `vic-machine create` creates the following trusted CA, server, and client certificate/key pairs in the certificate folder:
  - `ca.pem`
  - `ca-key.pem`
  - `cert.pem`
  - `key.pem`
  - `server-cert.pem`
  - `server-key.pem`
- Creates a browser-friendly PFX client certificate, `cert.pfx`, to use to authenticate connections to the VCH Admin portal for the VCH.
NOTE: The folder and file permissions for the generated certificate and key are readable only by the user who created them.
Running `vic-machine create` with the `--tls-cname` option also creates an environment file named `vch_name.env`, which contains Docker environment variables that container developers can use to configure their Docker client environment:
- Activates TLS client verification.
DOCKER_TLS_VERIFY=1
- The path to the client certificates.
DOCKER_CERT_PATH=path_to_certs
- The address of the VCH.
DOCKER_HOST=vch_address:2376
You must provide copies of the `cert.pem` and `key.pem` client certificate files and the environment file to container developers so that they can connect Docker clients to the VCH. If you deploy the VCH with the `--tls-cname` option, container developers must configure the client appropriately with one of the following options:

- By using the `--tlsverify`, `--tlscert`, and `--tlskey` Docker options, adding `--tlscacert` if a custom CA was used to sign the server certificate.
- By setting the `DOCKER_CERT_PATH=/path/to/client/cert.pem` and `DOCKER_TLS_VERIFY=1` environment variables.
--tls-cname vch-name.example.org
--tls-cname *.example.org
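For example, a sketch of a deployment that auto-generates trusted certificates might combine --tls-cname with the --cert-path option that is described below. The target address, FQDN, and certificate folder path are placeholder values for illustration only.

vic-machine create \
--target 'administrator@vsphere.local':'p@ssw0rd'@vcsa.example.org \
--tls-cname vch1.example.org \
--cert-path /home/admin/vch1-certs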
--cert-path
Short name: none
By default, `--cert-path` is a folder in the current directory that takes its name from the VCH name that you specify in the `--name` option. `vic-machine create` checks in `--cert-path` for existing certificates with the standard names and uses those certificates if they are present:
server-cert.pem
server-key.pem
ca.pem
If `vic-machine create` does not find existing certificates with the standard names in `--cert-path`, or if you do not specify certificates directly by using the `--cert`, `--key`, and `--tls-ca` options, `vic-machine create` generates certificates. Generated certificates are saved in the `--cert-path` folder with the standard names listed. `vic-machine create` additionally generates other certificates:
- `cert.pem` and `key.pem` for client certificates, if required.
- `ca-key.pem`, the private key for the certificate authority.
--cert-path 'path_to_certificate_folder'
--certificate-key-size
Short name: --ksz
The size of the key for `vic-machine create` to use when it creates auto-generated trusted certificates. You can optionally use `--certificate-key-size` if you specify `--tls-cname`. If not specified, `vic-machine create` creates keys with a default size of 2048 bits. Using key sizes of less than 2048 bits is not recommended.
--certificate-key-size 3072
--organization
Short name: None
A list of identifiers to record in certificates generated by `vic-machine`. You can optionally use `--organization` if you specify `--tls-cname`. If not specified, `vic-machine create` uses the name of the VCH as the organization value.

NOTE: The client IP address is used for the `CommonName` attribute but not for the `Organization` attribute.
--organization organization_name
Restrict Access to the Docker API with Custom Certificates
To exercise fine control over the certificates that VCHs use, obtain or generate custom certificates yourself before you deploy a VCH. Use the `--key`, `--cert`, and `--tls-ca` options to pass the custom certificates to `vic-machine create`.
--cert
Short name: none
The path to a custom X.509 server certificate. This certificate identifies the VCH endpoint VM both to Docker clients and to browsers that connect to the VCH Admin portal.
- This certificate should have the following certificate usages:
  - `KeyEncipherment`
  - `DigitalSignature`
  - `KeyAgreement`
  - `ServerAuth`
- This option is mandatory if you use custom TLS certificates, rather than auto-generated certificates.
- Use this option in combination with the `--key` option, which provides the path to the private key file for the custom certificate.
- Include the names of the certificate and key files in the paths.
- If you use trusted custom certificates, container developers run Docker commands with the `--tlsverify`, `--tlscacert`, `--tlscert`, and `--tlskey` options.
--cert path_to_certificate_file/certificate_file_name.pem --key path_to_key_file/key_file_name.pem
Wrap the folder names in the paths in single quotes (Linux or Mac OS) or double quotes (Windows) if they include spaces.
--cert 'path to certificate file'/certificate_file_name.pem --key 'path to key file'/key_file_name.pem
--key
Short name: none
The path to the private key file to use with a custom server certificate. This option is mandatory if you specify the `--cert` option, which provides the path to a custom X.509 certificate file. Include the names of the certificate and key files in the paths.
IMPORTANT: The key must not be encrypted.
--cert path_to_certificate_file/certificate_file_name.pem --key path_to_key_file/key_file_name.pem
Wrap the folder names in the paths in single quotes (Linux or Mac OS) or double quotes (Windows) if they include spaces.
--cert 'path to certificate file'/certificate_file_name.pem --key 'path to key file'/key_file_name.pem
--tls-ca
Short name: --ca
You can specify `--tls-ca` multiple times, to point `vic-machine create` to a file that contains the public portion of a CA. `vic-machine create` uses these CAs to validate client certificates that are offered as credentials for Docker API access. This does not need to be the same CA that you use to sign the server certificate.
--tls-ca path_to_ca_file
NOTE: The `--tls-ca` option appears in the extended help that you see by running `vic-machine create --extended-help` or `vic-machine create -x`.
Do Not Restrict Access to the Docker API
To deploy a VCH that does not restrict access to the Docker API, use the `--no-tlsverify` option. To completely disable TLS authentication, use the `--no-tls` option.
--no-tlsverify
Short name: --kv
The `--no-tlsverify` option prevents the use of CAs for client authentication. You still require a server certificate if you use `--no-tlsverify`. You can still supply a custom server certificate by using the `--cert` and `--key` options. If you do not use `--cert` and `--key` to supply a custom server certificate, `vic-machine create` generates a self-signed server certificate. If you specify `--no-tlsverify`, there is no access control; however, connections remain encrypted.
When you specify the `--no-tlsverify` option, `vic-machine create` performs the following actions during the deployment of the VCH:
- Generates a self-signed server certificate if you do not specify `--cert` and `--key`.
- Creates a folder with the same name as the VCH in the location in which you run `vic-machine create`.
- Creates an environment file named `vch_name.env` in that folder, which contains the `DOCKER_HOST=vch_address` environment variable that you can provide to container developers to set up their Docker client environment.
If you deploy a VCH with the `--no-tlsverify` option, container developers run Docker commands with the `--tls` option, and the `DOCKER_TLS_VERIFY` environment variable must not be set. Note that setting `DOCKER_TLS_VERIFY` to 0 or `false` has no effect.
The `--no-tlsverify` option takes no arguments.
--no-tlsverify
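For example, a container developer connecting to a VCH that was deployed with --no-tlsverify might run the following command, in which vch_address is a placeholder for the client network address of the VCH.

docker -H vch_address:2376 --tls info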
--no-tls
Short name: -k
Disables TLS authentication of connections between the Docker client and the VCH. VCHs use neither client nor server certificates.
Set the `--no-tls` option if you do not require TLS authentication between the VCH and the Docker client. If you disable TLS authentication, any Docker client can connect to the VCH, and connections are not encrypted.

If you use the `--no-tls` option, container developers connect Docker clients to the VCH via port 2375, instead of via port 2376.
--no-tls
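For example, a container developer connecting to a VCH that was deployed with --no-tls might run the following command, in which vch_address is a placeholder for the client network address of the VCH.

docker -H vch_address:2375 info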
Specify Different User Accounts for VCH Deployment and Operation
Because deploying a VCH requires greater levels of permissions than running a VCH, you can configure a VCH so that it uses different user accounts for deployment and for operation. In this way, you can limit the day-to-day operation of a VCH to an account that does not have full administrator permissions on the target vCenter Server.
--ops-user
Short name: None
A vSphere user account with which the VCH runs after deployment. If not specified, the VCH runs with the vSphere Administrator credentials with which you deploy the VCH, which you specify in either `--target` or `--user`.
--ops-user user_name
Wrap the user name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes special characters.
--ops-user 'user_n@me'
The user account that you specify in `--ops-user` must exist before you deploy the VCH. For information about the permissions that the `--ops-user` account requires, see Use Different User Accounts for VCH Deployment and Operation.
--ops-password
Short name: None
The password or token for the operations user that you specify in `--ops-user`. If not specified, `vic-machine create` prompts you to enter the password for the `--ops-user` account.
--ops-password password
Wrap the password in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes special characters.
--ops-password 'p@ssword'
Private Registry Options
If you use vSphere Integrated Containers Registry, or if container developers need to access Docker images that are stored in other private registry servers, you must configure VCHs to allow them to connect to these private registry servers when you deploy the VCHs. VCHs can connect to both secure and insecure private registry servers.
--registry-ca
Short name: --rc
The path to a CA certificate that can validate the server certificate of a private registry. You can specify `--registry-ca` multiple times to specify multiple CA certificates for different registries. This allows a VCH to connect to multiple registries.

The use of registry certificates is independent of the Docker client security options that you specify. For example, it is possible to use the `--no-tls` option to disable TLS authentication between Docker clients and the VCH, and to use the `--registry-ca` option to enable TLS authentication between the VCH and a private registry.
You must use this option to allow a VCH to connect to vSphere Integrated Containers Registry. For information about how to obtain the CA certificate from vSphere Integrated Containers Registry, see Deploy a VCH for Use with vSphere Integrated Containers Registry.
--registry-ca path_to_ca_cert_1 --registry-ca path_to_ca_cert_2
NOTE: The `--registry-ca` option appears in the extended help that you see by running `vic-machine create --extended-help` or `vic-machine create -x`.
--insecure-registry
Short name: --dir
If you set the `--insecure-registry` option, the VCH does not verify the certificate of that registry when it pulls images. Insecure private registries are not recommended in production environments.

If you authorize a VCH to connect to an insecure private registry server, the VCH attempts to access the registry server via HTTP if access via HTTPS fails. VCHs always use HTTPS when connecting to registry servers for which you have not authorized insecure access.

You can specify `--insecure-registry` multiple times if multiple insecure registries are permitted. If the registry server listens on a specific port, add the port number to the URL.
--insecure-registry registry_URL_1 --insecure-registry registry_URL_2:port_number
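For example, a sketch that configures one registry with a CA certificate and authorizes insecure access to another might combine the two options as follows. The certificate path, registry address, and port are placeholder values.

--registry-ca /home/admin/certs/registry_ca.crt --insecure-registry dev-registry.example.org:5000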
Datastore Options
The `vic-machine` utility allows you to specify the datastore in which to store container image files, container VM files, and the files for the VCH. You can also specify datastores in which to create container volumes.
- vSphere Integrated Containers Engine fully supports VMware vSAN datastores.
- vSphere Integrated Containers Engine supports all alphanumeric characters, hyphens, and underscores in datastore paths and datastore names, but no other special characters.
- If you specify different datastores in the different datastore options, and if no single host in a cluster can access all of those datastores, `vic-machine create` fails with an error:
  No single host can access all of the requested datastores. Installation cannot continue.
- If you specify different datastores in the different datastore options, and if only one host in a cluster can access all of them, `vic-machine create` succeeds with a warning:
  Only one host can access all of the image/container/volume datastores. This may be a point of contention/performance degradation and HA/DRS may not work as intended.
- VCHs do not support datastore name changes. If a datastore changes name after you have deployed a VCH that uses that datastore, that VCH will no longer function.
--image-store
Short name: -i
The datastore in which to store container image files, container VM files, and the files for the VCH. The `--image-store` option is mandatory if there is more than one datastore in your vSphere environment. If there is only one datastore in your vSphere environment, the `--image-store` option is not required.

If you do not specify the `--image-store` option and multiple possible datastores exist, or if you specify an invalid datastore name, `vic-machine create` fails and suggests valid datastores in the failure message.
If you are deploying the VCH to a vCenter Server cluster, the datastore that you designate in the `--image-store` option must be shared by at least two ESXi hosts in the cluster. Using non-shared datastores is possible, but limits the use of vSphere features such as vSphere vMotion® and VMware vSphere Distributed Resource Scheduler™ (DRS).
To specify a whole datastore as the image store, specify the datastore name in the `--image-store` option:
--image-store datastore_name
If you designate a whole datastore as the image store, `vic-machine` creates the following set of folders in the target datastore:

- `datastore_name/VIC/vch_uuid/images`, in which to store all of the container images that you pull into the VCH.
- `datastore_name/vch_name`, which contains the VM files for the VCH.
- `datastore_name/vch_name/kvstores`, a key-value store folder for the VCH.
You can specify a datastore folder to use as the image store by specifying a path in the `--image-store` option:
--image-store datastore_name/path
If the folder that you specify in `/path` does not already exist, `vic-machine create` creates it. Wrap the datastore name and path in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if they include spaces:
--image-store 'datastore name'/'datastore path'
If you designate a datastore folder as the image store, `vic-machine` creates the following set of folders in the target datastore:

- `datastore_name/path/VIC/vch_uuid/images`, in which to store all of the container images that you pull into the VCH.
- `datastore_name/vch_name`, which contains the VM files for the VCH. This is the same as if you specified a datastore as the image store.
- `datastore_name/vch_name/kvstores`, a key-value store folder for the VCH. This is the same as if you specified a datastore as the image store.
By specifying the path to a datastore folder in the `--image-store` option, you can designate the same datastore folder as the image store for multiple VCHs. In this way, `vic-machine create` creates only one `VIC` folder in the datastore, at the path that you specify. The `VIC` folder contains one `vch_uuid/images` folder for each VCH that you deploy. By creating one `vch_uuid/images` folder for each VCH, vSphere Integrated Containers Engine limits the potential for conflicts of image use between VCHs, even if you share the same image store folder between multiple hosts.
When container developers create containers, vSphere Integrated Containers Engine stores the files for container VMs at the top level of the image store, in folders that have the same name as the containers.
--volume-store
Short name: --vs
The datastore in which to create volumes when container developers use the `docker volume create` or `docker create -v` commands. When you specify the `--volume-store` option, you provide the name of the target datastore and a label for the volume store. You can optionally provide a path to a specific folder in the datastore in which to create the volume store. If the folders that you specify in the path do not already exist on the datastore, `vic-machine create` creates the appropriate folder structure.
`vic-machine create` creates the `volumes` folder independently from the folders for VCH files so that you can share volumes between VCHs. If you delete a VCH, any volumes that the VCH managed remain available in the volume store unless you specify the `--force` option when you delete the VCH. You can then assign an existing volume store that already contains data to a newly created VCH.
IMPORTANT: If multiple VCHs will use the same datastore for their volume stores, specify a different datastore folder for each VCH. Do not designate the same datastore folder as the volume store for multiple VCHs.
If you are deploying the VCH to a vCenter Server cluster, the datastore that you designate in the `--volume-store` option should be shared by at least two ESXi hosts in the cluster. Using non-shared datastores is possible and `vic-machine create` succeeds, but it issues a warning that this configuration limits the use of vSphere features such as vSphere vMotion and DRS.

The label that you specify is the volume store name that Docker uses. For example, the volume store label appears in the information for a VCH when container developers run `docker info`. Container developers specify the volume store label in the `docker volume create --opt VolumeStore=volume_store_label` option when they create a volume.
If you specify an invalid datastore name, `vic-machine create` fails and suggests valid datastores.
IMPORTANT: If you do not specify the `--volume-store` option, no volume store is created and container developers cannot use the `docker volume create` or `docker create -v` commands.
- If you only require one volume store, you can set the volume store label to `default`. If you set the volume store label to `default`, container developers do not need to specify the `--opt VolumeStore=volume_store_label` option when they run `docker volume create`.

  NOTE: If container developers intend to use `docker create -v` to create containers that are attached to anonymous or named volumes, you must create a volume store with a label of `default`.

  --volume-store datastore_name:default

- If you specify the target datastore and the volume store label, `vic-machine create` creates a folder named `VIC/volumes` at the top level of the target datastore. Any volumes that container developers create will appear in the `VIC/volumes` folder.

  --volume-store datastore_name:volume_store_label

- If you specify the target datastore, a datastore path, and the volume store label, `vic-machine create` creates a folder named `volumes` in the location that you specify in the datastore path. Any volumes that container developers create will appear in the `path/volumes` folder.

  --volume-store datastore_name/datastore_path:volume_store_label

- Wrap the datastore name and path in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if they include spaces. The volume store label cannot include spaces.

  --volume-store 'datastore name'/'datastore path':volume_store_label

- You can specify the `--volume-store` option multiple times, to create multiple volume stores for the VCH.

  --volume-store datastore_name/path:volume_store_label_1 --volume-store datastore_name/path:volume_store_label_2 [...] --volume-store datastore_name/path:volume_store_label_n
Networking Options
The `vic-machine create` utility allows you to specify different networks for the different types of traffic between containers, the VCH, the external internet, and your vSphere environment. For information about the different networks that VCHs use, see Networks Used by vSphere Integrated Containers Engine.

IMPORTANT: A VCH supports a maximum of three distinct network interfaces. Because the bridge network requires its own port group, at least two of the public, client, and management networks must share a network interface and therefore a port group. Container networks do not go through the VCH, so they are not subject to this limitation. This limitation will be removed in a future release.
By default, `vic-machine create` obtains IP addresses for VCH endpoint VMs by using DHCP. For information about how to specify a static IP address for the VCH endpoint VM on the client, public, and management networks, see Options for Specifying a Static IP Address for the VCH Endpoint VM in Advanced Options.
If your network access is controlled by a proxy server, see Options to Configure VCHs to Use Proxy Servers in Advanced Options.
When you specify different network interfaces for the different types of traffic, `vic-machine create` checks that the firewalls on the ESXi hosts allow connections to port 2377 from those networks. If access to port 2377 on one or more ESXi hosts is subject to IP address restrictions, and if those restrictions block access to the network interfaces that you specify, `vic-machine create` fails with a firewall configuration error:
Firewall configuration incorrect due to allowed IP restrictions on hosts: "/ha-datacenter/host/localhost.localdomain/localhost.localdomain" Firewall must permit dst 2377/tcp outbound to the VCH management interface
For information about how to open port 2377, see Open the Required Ports on ESXi Hosts.
--bridge-network
Short name: -b
A port group that container VMs use to communicate with each other.
The `--bridge-network` option is mandatory if you are deploying a VCH to vCenter Server.
In a vCenter Server environment, before you run `vic-machine create`, you must create a distributed virtual switch and a port group. You must add the target ESXi host or hosts to the distributed virtual switch, and assign a VLAN ID to the port group, to ensure that the bridge network is isolated. For information about how to create a distributed virtual switch and port group, see the section on vCenter Server Network Requirements in Environment Prerequisites for VCH Deployment.
You pass the name of the port group to the `--bridge-network` option. Each VCH requires its own port group.
IMPORTANT
- Do not assign the same `--bridge-network` port group to multiple VCHs. Sharing a port group between VCHs might result in multiple container VMs being assigned the same IP address.
- Do not use the `--bridge-network` port group as the target for any of the other `vic-machine create` networking options.
If you specify an invalid port group name, `vic-machine create` fails and suggests valid port groups.
The `--bridge-network` option is optional when you are deploying a VCH to an ESXi host with no vCenter Server. In this case, if you do not specify `--bridge-network`, `vic-machine` creates a virtual switch and a port group that each have the same name as the VCH. You can optionally specify this option to assign an existing port group for use as the bridge network for container VMs. You can also optionally specify this option to create a new virtual switch and port group that have a different name from the VCH.
--bridge-network port_group_name
Wrap the port group name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes spaces.
--bridge-network 'port group name'
If you intend to use the `--ops-user` option to use different user accounts for deployment and operation of the VCH, you must place the bridge network port group in a network folder that has the `Read-Only` role with propagation enabled. For more information about the requirements when using `--ops-user`, see Use Different User Accounts for VCH Deployment and Operation.

For information about how to specify a range of IP addresses for additional bridge networks, see `--bridge-network-range` in Other Advanced Options.
--client-network
Short name: --cln
A port group on which the VCH will make the Docker API available to Docker clients. Docker clients use this network to issue Docker API requests to the VCH.

If not specified, the VCH uses the public network for client traffic. If you specify an invalid port group name, `vic-machine create` fails and suggests valid port groups.
--client-network port_group_name
Wrap the port group name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes spaces.
--client-network 'port group name'
--public-network
Short name: --pn
A port group for containers to use to connect to the Internet. VCHs use the public network to pull container images, for example from https://hub.docker.com/. Containers that use port mapping expose network services on the public interface.

NOTE: vSphere Integrated Containers Engine adds a new capability to Docker that allows you to directly map containers to a network by using the `--container-network` option. This is the recommended way to deploy container services.

If not specified, containers use the VM Network for public network traffic. If you specify an invalid port group name, `vic-machine create` fails and suggests valid port groups.
--public-network port_group
Wrap the network name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes spaces.
--public-network 'port group name'
--management-network
Short name: --mn
A port group that the VCH uses to communicate with vCenter Server and ESXi hosts. Container VMs use this network to communicate with the VCH.
IMPORTANT: Because the management network provides access to your vSphere environment, and because container VMs use this network to communicate with the VCH, always use a secure network for the management network.
When you create a VCH, `vic-machine create` checks that the firewall on ESXi hosts allows connections to port 2377 from the management network of the VCH. If access to port 2377 on ESXi hosts is subject to IP address restrictions, and if those restrictions block access to the management network interface, `vic-machine create` fails with a firewall configuration error:
Firewall configuration incorrect due to allowed IP restrictions on hosts: "/ha-datacenter/host/localhost.localdomain/localhost.localdomain" Firewall must permit dst 2377/tcp outbound to the VCH management interface
For information about how to open port 2377, see Open the Required Ports on ESXi Hosts.
NOTE: If the management network uses DHCP, `vic-machine` checks the firewall status of the management network before the VCH receives an IP address. It is therefore not possible to fully assess whether the firewall permits the IP address of the VCH. In this case, `vic-machine create` issues a warning:
Unable to fully verify firewall configuration due to DHCP use on management network VCH management interface IP assigned by DHCP must be permitted by allowed IP settings Firewall allowed IP configuration may prevent required connection on hosts: "/ha-datacenter/host/localhost.localdomain/localhost.localdomain" Firewall must permit dst 2377/tcp outbound to the VCH management interface
If not specified, the VCH uses the public network for management traffic. If you specify an invalid port group name, `vic-machine create` fails and suggests valid port groups.
--management-network port_group_name
Wrap the network name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes spaces.
--management-network 'port group name'
--container-network
Short name: --cn
A port group for container VMs to use for external communication when container developers run `docker run` or `docker create` with the `--net` option.

You can optionally specify one or more container networks. Container networks allow containers to directly attach to a network without having to route through the VCH via network address translation (NAT). Container networks that you add by using the `--container-network` option appear when you run the `docker network ls` command. These networks are available for use by containers. Containers that use these networks are directly attached to the container network, and do not go through the VCH or share the public IP of the VCH.
IMPORTANT: For security reasons, whenever possible, use separate port groups for the container network and the management network.
To specify a container network, you provide the name of a port group for the container VMs to use, and an optional descriptive name for the container network for use by Docker. If you do not specify a descriptive name, Docker uses the vSphere network name.
IMPORTANT: The descriptive name is optional unless the port group name contains spaces. If the port group name contains spaces, you must specify a descriptive name. The descriptive name cannot contain spaces.
If you specify an invalid port group name, `vic-machine create` fails and suggests valid port groups.
- You can specify a vSphere network as the container network.
- The port group must exist before you run `vic-machine create`.
- You cannot use the same port group as you use for the bridge network.
- You can create the port group on the same distributed virtual switch as the port group that you use for the bridge network.
- If the port group that you specify in the `--container-network` option does not support DHCP, see Options for Configuring a Non-DHCP Network for Container Traffic in Advanced Options.
- The descriptive name appears under `Networks` when you run `docker info` or `docker network ls` on the deployed VCH.
- Container developers use the descriptive name in the `--net` option when they run `docker run` or `docker create`.
You can specify `--container-network` multiple times to add multiple vSphere networks to Docker.

If you do not specify `--container-network`, or if you deploy containers that do not use a container network, the containers' network services are still available via port mapping through the VCH, by using NAT through the public interface of the VCH.
--container-network port_group_name:container_network_name
Wrap the port group name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes spaces. The descriptive name cannot include spaces.

--container-network 'port group name':container_network_name
If you intend to use the `--ops-user` option to use different user accounts for deployment and operation of the VCH, you must place any container network port groups in a network folder that has the `Read-Only` role with propagation enabled. For more information about the requirements when using `--ops-user`, see Use Different User Accounts for VCH Deployment and Operation.
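For example, a sketch of the networking options for a VCH that separates bridge, public, management, and container traffic might look like the following. All of the port group names are placeholders for port groups that must already exist in your environment.

--bridge-network vch1-bridge \
--public-network vm-network \
--management-network vsphere-mgmt \
--container-network vic-containers:vic-containers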
General Deployment Options
The `vic-machine` utility provides options to customize the VCH.
--name
Short name: -n
A name for the VCH. If not specified, `vic-machine` sets the name of the VCH to `virtual-container-host`. If a VCH of the same name exists on the ESXi host or in the vCenter Server inventory, or if a folder of the same name exists in the target datastore, `vic-machine create` creates a folder named `vch_name_1`. If the name that you provide contains unsupported characters, `vic-machine create` fails with an error.
--name vch_name
Wrap the name in single quotes (') on Mac OS and Linux and in double quotes (") on Windows if it includes spaces.
--name 'vch name'
--memory
Short name: --mem
Limit the amount of memory that is available for use by the VCH vApp in vCenter Server, or for the VCH resource pool on an ESXi host. This limit also applies to the container VMs that run in the VCH vApp or resource pool. Specify the memory limit value in MB. If not specified, `vic-machine create` sets the limit to 0 (unlimited).
--memory 1024
--cpu
Short name: None
Limit the amount of CPU capacity that is available for use by the VCH vApp in vCenter Server, or for the VCH resource pool on an ESXi host. This limit also applies to the container VMs that run in the VCH vApp or resource pool. Specify the CPU limit value in MHz. If not specified, `vic-machine create` sets the limit to 0 (unlimited).
--cpu 1024
--force
Short name: -f
Forces `vic-machine create` to ignore warnings and non-fatal errors and continue with the deployment of a VCH. Errors such as an incorrect compute resource still cause the deployment to fail.

If your vSphere environment uses untrusted, self-signed certificates, you can use the `--force` option to deploy a VCH without providing the thumbprint of the vCenter Server or ESXi host in the `--thumbprint` option.

IMPORTANT: Running `vic-machine create` with the `--force` option rather than providing the certificate thumbprint is not recommended, because it permits man-in-the-middle attacks to go undetected.
--force
--timeout
Short name: none
The timeout period for uploading the vSphere Integrated Containers Engine files and ISOs to the ESXi host, and for powering on the VCH. Specify a value in the format `XmYs` if the default timeout of 3m0s is insufficient.
--timeout 5m0s
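For example, a sketch that combines these general options for a single VCH might look like the following. The values shown are illustrative only.

--name vch1 --memory 4096 --cpu 4096 --timeout 5m0s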
Advanced Options
The options in this section are exposed in the `vic-machine create` help if you run `vic-machine create --extended-help` or `vic-machine create -x`.
.
Options for Specifying a Static IP Address for the VCH Endpoint VM
You can specify a static IP address for the VCH endpoint VM on each of the client, public, and management networks. DHCP is used for the endpoint VM for any network on which you do not specify a static IP address.
To specify a static IP address for the endpoint VM on the client, public, or management network, you provide an IP address in the `--client-network-ip`, `--public-network-ip`, or `--management-network-ip` option. If you set a static IP address, you can optionally provide gateway addresses and specify one or more DNS server addresses.
--dns-server
Short name: None
A DNS server for the VCH endpoint VM to use on the client, public, or management networks. You can specify `--dns-server` multiple times, to configure multiple DNS servers.
- If you specify `--dns-server`, `vic-machine create` always uses the `--dns-server` setting for all three of the client, public, and management networks.
- If you do not specify `--dns-server` and you specify a static IP address for the endpoint VM on all three of the client, public, and management networks, `vic-machine create` uses the Google public DNS service.
- If you do not specify `--dns-server` and you use a mixture of static IP addresses and DHCP for the client, public, and management networks, `vic-machine create` uses the DNS servers that DHCP provides.
- If you do not specify `--dns-server` and you use DHCP for all of the client, public, and management networks, `vic-machine create` uses the DNS servers that DHCP provides.
--dns-server=172.16.10.10 --dns-server=172.16.10.11
--client-network-ip, --public-network-ip, --management-network-ip
Short name: None
A static IP address for the VCH endpoint VM on the public, client, or management network.
You specify a static IP address for the endpoint VM on the public, client, or management networks by using the `--public-network-ip`, `--client-network-ip`, and `--management-network-ip` options. If you set a static IP address for the endpoint VM on the public network, you must specify a corresponding gateway address by using the `--public-network-gateway` option. If the management and client networks are L2 adjacent to their gateways, you do not need to specify the gateway for those networks.
- You can only specify one static IP address on a given port group. If more than one of the client, public, or management networks share a port group, you can only specify a static IP address on one of those networks. All of the networks that share that port group use the IP address that you specify.
- If either of the client or management networks shares a port group with the public network, you can only specify a static IP address on the public network.
- If either or both of the client or management networks do not use the same port group as the public network, you can specify a static IP address for the endpoint VM on those networks by using `--client-network-ip` or `--management-network-ip`, or both. In this case, you must specify a corresponding gateway address by using `--client-network-gateway` or `--management-network-gateway`.
. - If the client and management networks both use the same port group, and the public network does not use that port group, you can set a static IP address for the endpoint VM on either or both of the client and management networks.
If you assign a static IP address to the VCH endpoint VM on the client network by setting the `--client-network-ip` option, and you do not specify one of the TLS options, `vic-machine create` uses this address as the Common Name with which to auto-generate trusted CA certificates. If you do not specify `--tls-cname`, `--no-tls`, or `--no-tlsverify`, two-way TLS authentication with trusted certificates is implemented by default when you deploy the VCH with a static IP address on the client network. If you assign a static IP address to the endpoint VM on the client network, `vic-machine create` creates the same certificate and environment variable files as described in the `--tls-cname` option.

IMPORTANT: If the client network shares a port group with the public network, you cannot set a static IP address for the endpoint VM on the client network. To assign a static IP address to the endpoint VM, you must set a static IP address on the public network by using the `--public-network-ip` option. In this case, `vic-machine create` uses the public network IP address as the Common Name with which to auto-generate trusted CA certificates, in the same way as it would for the client network.

If you do not specify an IP address for the endpoint VM on a given network, `vic-machine create` uses DHCP to obtain an IP address for the endpoint VM on that network.
You specify addresses as IPv4 addresses with a network mask.
--public-network-ip 192.168.X.N/24 --management-network-ip 192.168.Y.N/24 --client-network-ip 192.168.Z.N/24
You can also specify addresses as resolvable FQDNs.
--public-network-ip=vch27-team-a.internal.domain.com --management-network-ip=vch27-team-b.internal.domain.com --client-network-ip=vch27-team-c.internal.domain.com
--client-network-gateway, --public-network-gateway, --management-network-gateway
Short name: None
The gateway to use if you specify a static IP address for the VCH endpoint VM by using `--public-network-ip`, `--client-network-ip`, or `--management-network-ip`. If you specify a static IP address on the public network, you must specify a gateway by using the `--public-network-gateway` option. If the management and client networks are L2 adjacent to their gateways, you do not need to specify the gateway for those networks.
You specify gateway addresses as IP addresses without a network mask.
--public-network-gateway 192.168.X.1
The default route for the VCH endpoint VM is always on the public network. As a consequence, if you specify a static IP address on either of the management or client networks and those networks are not L2 adjacent to their gateways, you must specify the routing destination for those networks in the `--management-network-gateway` and `--client-network-gateway` options. You specify the routing destination or destinations in a comma-separated list, with the address of the gateway separated from the routing destinations by a colon (:).
--management-network-gateway routing_destination_1,routing_destination_2:gateway_address
--client-network-gateway routing_destination_1,routing_destination_2:gateway_address
In the following example, `--management-network-gateway` informs the VCH that it can reach all of the vSphere management endpoints that are in the ranges 192.168.3.0-192.168.3.255 and 192.168.128.0-192.168.131.255 by sending packets to the gateway at 192.168.2.1. Ensure that the address ranges that you specify include all of the systems that will connect to this VCH instance.
--management-network-gateway 192.168.3.0,192.168.128.0:192.168.2.1
Options for Configuring a Non-DHCP Network for Container Traffic
If the network that you specify in the `--container-network` option does not support DHCP, you must specify the `--container-network-gateway` option. You can optionally specify one or more DNS servers and a range of IP addresses for container VMs on the container network.

For information about the container network, see the section on the `--container-network` option.
--container-network-gateway
Short name: --cng
The gateway for the subnet of the container network. This option is required if the network that you specify in the `--container-network` option does not support DHCP. Specify the gateway in the format `container_network:subnet`. If you specify this option, it is recommended that you also specify the `--container-network-dns` option.
When you specify the container network gateway, you must use the port group that you specify in the `--container-network` option. If you specify `--container-network-gateway` but you do not specify `--container-network`, or if you specify a different port group from the one that you specify in `--container-network`, `vic-machine create` fails with an error.
--container-network-gateway port_group_name:gateway_ip_address/subnet_mask
Wrap the port group name in single quotes (Linux or Mac OS) or double quotes (Windows) if it includes spaces.
--container-network-gateway 'port group name':gateway_ip_address/subnet_mask
--container-network-dns
Short name: --cnd
The address of the DNS server for the container network. This option is recommended if the network that you specify in the `--container-network` option does not support DHCP.
When you specify the container network DNS server, you must use the port group that you specify in the `--container-network` option. You can specify `--container-network-dns` multiple times, to configure multiple DNS servers. If you specify `--container-network-dns` but you do not specify `--container-network`, or if you specify a different port group from the one that you specify in `--container-network`, `vic-machine create` fails with an error.
--container-network-dns port_group_name:8.8.8.8
Wrap the port group name in single quotes (Linux or Mac OS) or double quotes (Windows) if it includes spaces.
--container-network-dns 'port group name':8.8.8.8
--container-network-ip-range
Short name: --cnr
The range of IP addresses that container VMs can use if the network that you specify in the `--container-network` option does not support DHCP. If you specify `--container-network-ip-range`, VCHs manage the addresses for containers within that range. The range that you specify must not be used by other computers or VMs on the network. You must also specify `--container-network-ip-range` if container developers need to deploy containers with static IP addresses. If you specify `--container-network-gateway` but do not specify `--container-network-ip-range`, the IP range for container VMs is the entire subnet that you specify in `--container-network-gateway`.
When you specify the container network IP range, you must use the port group that you specify in the `--container-network` option. If you specify `--container-network-ip-range` but you do not specify `--container-network`, or if you specify a different port group from the one that you specify in `--container-network`, `vic-machine create` fails with an error.
--container-network-ip-range port_group_name:192.168.100.2-192.168.100.254
You can also specify the IP range as a CIDR.
--container-network-ip-range port_group_name:192.168.100.0/24
Wrap the port group name in single quotes (Linux or Mac OS) or double quotes (Windows) if it includes spaces.
--container-network-ip-range 'port group name':192.168.100.0/24
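For example, a sketch that configures a non-DHCP container network might combine these options as follows. The port group name, gateway, DNS server, and IP range are placeholder values.

--container-network vic-containers:vic-containers \
--container-network-gateway vic-containers:192.168.100.1/24 \
--container-network-dns vic-containers:192.168.100.10 \
--container-network-ip-range vic-containers:192.168.100.2-192.168.100.254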
Options to Configure VCHs to Use Proxy Servers
If access to the Internet or to your private image registries requires the use of a proxy server, you must configure a VCH to connect to the proxy server when you deploy it. The proxy is used only when pulling images, and not for any other purpose.
IMPORTANT: Configuring a VCH to use a proxy server does not configure proxy support on the containers that this VCH runs. Container developers must configure proxy servers on containers when they create them.
--https-proxy
Short name: --sproxy
The address of the HTTPS proxy server through which the VCH accesses image registries when using HTTPS. Specify the address of the proxy server as either an FQDN or an IP address.
--https-proxy https://proxy_server_address:port
--http-proxy
Short name: --hproxy
The address of the HTTP proxy server through which the VCH accesses image registries when using HTTP. Specify the address of the proxy server as either an FQDN or an IP address.
--http-proxy http://proxy_server_address:port
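For example, a sketch that routes image pulls through corporate proxy servers might combine the two options as follows. The proxy addresses and ports are placeholder values.

--https-proxy https://proxy.example.org:3128 --http-proxy http://proxy.example.org:3128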
Advanced Resource Management Options
You can set limits on the memory and CPU shares and reservations on the VCH. For information about memory and CPU shares and reservations, see Allocate Memory Resources, and Allocate CPU Resources in the vSphere documentation.
--memory-reservation
Short name: --memr
Reserve a quantity of memory for use by the VCH vApp in vCenter Server, or for the VCH resource pool on an ESXi host. This limit also applies to the container VMs that run in the VCH vApp or resource pool. Specify the memory reservation value in MB. If not specified, `vic-machine create` sets the reservation to 1.
--memory-reservation 1024
--memory-shares
Short name: --mems
Set memory shares on the VCH vApp in vCenter Server, or on the VCH resource pool on an ESXi host. This limit also applies to the container VMs that run in the VCH vApp or resource pool. Specify the share value as a level or a number, for example high, normal, low, or 163840. If not specified, vic-machine create sets the share to normal.
--memory-shares low
--cpu-reservation
Short name: --cpur
Reserve a quantity of CPU capacity for use by the VCH vApp in vCenter Server, or for the VCH resource pool on an ESXi host. This limit also applies to the container VMs that run in the VCH vApp or resource pool. Specify the CPU reservation value in MHz. If not specified, vic-machine create sets the reservation to 1.
--cpu-reservation 1024
--cpu-shares
Short name: --cpus
Set CPU shares on the VCH vApp in vCenter Server, or on the VCH resource pool on an ESXi host. This limit also applies to the container VMs that run in the VCH vApp or resource pool. Specify the share value as a level or a number, for example high, normal, low, or 163840. If not specified, vic-machine create sets the share to normal.
--cpu-shares low
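For example, the following hypothetical combination reserves 4096 MB of memory and 2048 MHz of CPU capacity for the VCH, and raises both share levels to high. The values are illustrative only, not sizing recommendations.
--memory-reservation 4096 --cpu-reservation 2048 --memory-shares high --cpu-shares high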
--endpoint-cpu
Short name: none
The number of virtual CPUs for the VCH endpoint VM. The default is 1. Set this option to increase the number of CPUs in the VCH endpoint VM.
NOTE: Always use the --cpu option instead of the --endpoint-cpu option to increase the overall CPU capacity of the VCH vApp, rather than increasing the number of CPUs on the VCH endpoint VM. The --endpoint-cpu option is mainly intended for use by VMware Support.
--endpoint-cpu number_of_CPUs
--endpoint-memory
Short name: none
The amount of memory for the VCH endpoint VM. The default is 2048MB. Set this option to increase the amount of memory in the VCH endpoint VM if the VCH will pull large container images.
NOTE: With the exception of VCHs that pull large container images, always use the --memory option instead of the --endpoint-memory option, to increase the overall amount of memory for the VCH vApp rather than the memory of the VCH endpoint VM. Use docker create -m to set the memory on container VMs. The --endpoint-memory option is mainly intended for use by VMware Support.
--endpoint-memory amount_of_memory
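For example, if VMware Support asks you to resize the endpoint VM, you might specify values such as the following. The numbers are illustrative only.
--endpoint-cpu 2 --endpoint-memory 4096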
Other Advanced Options
--bridge-network-range
Short name: --bnr
The range of IP addresses that additional bridge networks can use when container application developers use docker network create to create new bridge networks. If you do not specify the --bridge-network-range option, the IP range for bridge networks is 172.16.0.0/12.
When you specify the bridge network IP range, you specify the IP range as a CIDR. The smallest subnet that you can specify is /16. If you specify an invalid value for --bridge-network-range, vic-machine create fails with an error.
--bridge-network-range 192.168.0.0/16
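After deployment, a container developer consumes this range by creating additional bridge networks against the VCH Docker endpoint, for example with the standard Docker command below. The VCH address, port, and TLS options are placeholders that depend on how you deployed the VCH.
docker -H vch_address:2376 --tls network create app-network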
--base-image-size
Short name: None
The size of the base image from which to create other images. You should not normally need to use this option. Specify the size in GB or MB. The default size is 8GB. Images are thin-provisioned, so they do not usually consume 8GB of space.
--base-image-size 4GB
--container-store
Short name: --cs
The --container-store option is not enabled. Container VM files are stored in the datastore that you designate as the image store.
--appliance-iso
Short name: --ai
The path to the ISO image from which the VCH appliance boots. Set this option if you have moved the appliance.iso file to a folder that is not the folder that contains the vic-machine binary, or is not the folder from which you are running vic-machine. Include the name of the ISO file in the path.
NOTE: Do not use the --appliance-iso option to point vic-machine to an appliance.iso file that is a different version from the version of vic-machine that you are running.
--appliance-iso path_to_ISO_file/appliance.iso
Wrap the folder names in the path in single quotes (Linux or Mac OS) or double quotes (Windows) if they include spaces.
--appliance-iso 'path to ISO file'/appliance.iso
--bootstrap-iso
Short name: --bi
The path to the ISO image from which to boot container VMs. Set this option if you have moved the bootstrap.iso file to a folder that is not the folder that contains the vic-machine binary, or is not the folder from which you are running vic-machine. Include the name of the ISO file in the path.
NOTE: Do not use the --bootstrap-iso option to point vic-machine to a bootstrap.iso file that is a different version from the version of vic-machine that you are running.
--bootstrap-iso path_to_ISO_file/bootstrap.iso
Wrap the folder names in the path in single quotes (Linux or Mac OS) or double quotes (Windows) if they include spaces.
--bootstrap-iso 'path to ISO file'/bootstrap.iso
--use-rp
Short name: none
Deploy the VCH appliance to a resource pool on vCenter Server rather than to a vApp. If you specify this option, vic-machine create creates a resource pool with the same name as the VCH.
--use-rp
--debug
Short name: -v
Deploy the VCH with more verbose levels of logging, and optionally modify the behavior of vic-machine for troubleshooting purposes. Specifying the --debug option increases the verbosity of the logging for all aspects of VCH operation, not just deployment. For example, by setting the --debug option, you increase the verbosity of the logging for VCH initialization, VCH services, container VM initialization, and so on. If not specified, the --debug value is set to 0 and verbose logging is disabled.
NOTE: Do not confuse the vic-machine create --debug option with the vic-machine debug command, which enables access to the VCH endpoint VM. For information about vic-machine debug, see Debugging the VCH.
When you specify vic-machine create --debug, you set a debugging level of 1, 2, or 3. Setting --debug to 2 or 3 changes the behavior of vic-machine create as well as increasing the verbosity of the logs:
- --debug 1 provides extra verbosity in the logs, with no other changes to vic-machine behavior.
- --debug 2 exposes servers on more interfaces and launches pprof in container VMs.
- --debug 3 disables recovery logic and logs sensitive data: it disables the restart of failed components, prevents container VMs from shutting down, logs environment details for the user application, and collects application output in the log bundle.
Additionally, deploying a VCH with --debug 3 enables SSH access to the VCH endpoint VM console by default, with a root password of password, without requiring you to run the vic-machine debug command. This functionality enables you to perform targeted interactive diagnostics in environments in which a VCH endpoint VM failure occurs consistently, and in a fashion that prevents vic-machine debug from functioning.
IMPORTANT: There is no provision for persistently changing the default root password. Only use this configuration for debugging in a secured environment.
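For example, to redeploy a consistently failing VCH with full debugging enabled, you might add the option to your usual create command, as in the following sketch. The sketch omits the other mandatory options described in this topic.
vic-machine create --target vcenter_server_address --debug 3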