
Measurements
============

The Telemetry service collects meters within an OpenStack deployment. This section provides a brief summary of the meter format and origin, and lists the available meters.

Telemetry collects meters by polling infrastructure elements and by consuming the notifications emitted by other OpenStack services. For more information about the polling mechanism and notifications, see telemetry-data-collection. Several meters are collected both by polling and by consuming notifications; the origin of each meter is listed in the tables below.

.. note::

   You may need to configure Telemetry or other OpenStack services in order to be able to collect all the samples you need. For further information about configuration requirements, see the Telemetry chapter in the Installation Tutorials and Guides.

Telemetry uses the following meter types:

==========  =============================================
Type        Description
==========  =============================================
Cumulative  Increasing over time (instance hours)
Delta       Changing over time (bandwidth)
Gauge       Discrete items (floating IPs, image uploads) and fluctuating values (disk I/O)
==========  =============================================
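
The distinction matters when post-processing samples. For example, a cumulative meter such as ``cpu`` (CPU time in nanoseconds) is usually turned into a gauge-like utilization figure by differencing two samples. A minimal sketch of that arithmetic (the function name and values are illustrative, not part of Telemetry, which performs such transformations in its own pipeline):

```python
# Convert two cumulative ``cpu`` samples (CPU time in nanoseconds) into an
# average CPU-utilization percentage over the sampling interval.
# Illustrative only; Telemetry does this kind of rate-of-change
# transformation internally, not with this exact code.

def cpu_util_percent(cpu_ns_prev, cpu_ns_curr, interval_s, vcpus):
    """Average CPU utilization (%) between two cumulative samples."""
    used_ns = cpu_ns_curr - cpu_ns_prev      # delta of the cumulative meter
    capacity_ns = interval_s * 1e9 * vcpus   # total CPU time available
    return 100.0 * used_ns / capacity_ns

# A 2-vCPU instance that consumed 30 s of CPU time over a 60 s
# interval is 25 % utilized on average.
print(cpu_util_percent(0, 30 * 10**9, 60, 2))  # → 25.0
```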

Telemetry can store metadata with samples. This metadata can be extended for OpenStack Compute and OpenStack Object Storage.

In order to add metadata to OpenStack Compute samples, you have two options. The first is to specify the metadata when you boot a new instance; the additional information is then stored with the sample in the form of ``resource_metadata.user_metadata.*``. Each new field must be defined using the ``metering.`` prefix. The modified boot command looks like the following::

   $ openstack server create --property metering.custom_metadata=a_value my_vm

The other option is to set the ``reserved_metadata_keys`` option to the list of metadata keys that you would like included in the ``resource_metadata`` of the instance-related samples collected for OpenStack Compute. This option is included in the ``[DEFAULT]`` section of the ceilometer.conf configuration file.
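
A sketch of the relevant ceilometer.conf fragment; the key names in the value are illustrative examples, not defaults::

   [DEFAULT]
   # Instance metadata keys to copy into resource_metadata of compute
   # samples. The key names below are illustrative.
   reserved_metadata_keys = fleet,cost_center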

You can also specify headers whose values will be stored along with the sample data of OpenStack Object Storage. The additional information is also stored under ``resource_metadata``. The format of the new field is ``resource_metadata.http_header_$name``, where ``$name`` is the name of the header with ``-`` replaced by ``_``.

To specify the new headers, set the ``metadata_headers`` option in the ``[filter:ceilometer]`` section of proxy-server.conf in the swift configuration directory. You can use this additional data, for example, to distinguish external and internal users.
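
A sketch of the relevant proxy-server.conf fragment, assuming the ceilometer filter is already part of the proxy pipeline; the header name is an illustrative example::

   [filter:ceilometer]
   # Headers whose values are stored as resource_metadata.http_header_*
   # on Object Storage samples. The header name below is illustrative.
   metadata_headers = X-Service-Origin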

Measurements are grouped by the services that are polled by Telemetry or whose notifications Telemetry consumes.

OpenStack Compute
-----------------

The following meters are collected for OpenStack Compute.

Name Type Unit Resource Origin Support Note
**Meters added in the Mitaka release or earlier**
memory Gauge MB instance ID Notification Libvirt Volume of RAM allocated to the instance
memory.usage Gauge MB instance ID Pollster Libvirt Volume of RAM used by the instance from the amount of its allocated memory
memory.resident Gauge MB instance ID Pollster Libvirt Volume of RAM used by the instance on the physical machine
cpu Cumulative ns instance ID Pollster Libvirt CPU time used
vcpus Gauge vcpu instance ID Notification Libvirt Number of virtual CPUs allocated to the instance
disk.device.read.requests Cumulative request disk ID Pollster Libvirt Number of read requests
disk.device.write.requests Cumulative request disk ID Pollster Libvirt Number of write requests
disk.device.read.bytes Cumulative B disk ID Pollster Libvirt Volume of reads
disk.device.write.bytes Cumulative B disk ID Pollster Libvirt Volume of writes
disk.root.size Gauge GB instance ID Notification, Pollster Libvirt Size of root disk
disk.ephemeral.size Gauge GB instance ID Notification, Pollster Libvirt Size of ephemeral disk
disk.device.capacity Gauge B disk ID Pollster Libvirt The amount of disk per device that the instance can see
disk.device.allocation Gauge B disk ID Pollster Libvirt The amount of disk per device occupied by the instance on the host machine
disk.device.usage Gauge B disk ID Pollster Libvirt The physical size in bytes of the image container on the host per device
network.incoming.bytes Cumulative B interface ID Pollster Libvirt Number of incoming bytes
network.outgoing.bytes Cumulative B interface ID Pollster Libvirt Number of outgoing bytes
network.incoming.packets Cumulative packet interface ID Pollster Libvirt Number of incoming packets
network.outgoing.packets Cumulative packet interface ID Pollster Libvirt Number of outgoing packets
**Meters added in the Newton release**
perf.cpu.cycles Gauge cycle instance ID Pollster Libvirt Number of CPU cycles one instruction needs
perf.instructions Gauge instruction instance ID Pollster Libvirt Count of instructions
perf.cache.references Gauge count instance ID Pollster Libvirt Count of cache hits
perf.cache.misses Gauge count instance ID Pollster Libvirt Count of cache misses
**Meters added in the Ocata release**
network.incoming.packets.drop Cumulative packet interface ID Pollster Libvirt Number of incoming dropped packets
network.outgoing.packets.drop Cumulative packet interface ID Pollster Libvirt Number of outgoing dropped packets
network.incoming.packets.error Cumulative packet interface ID Pollster Libvirt Number of incoming error packets
network.outgoing.packets.error Cumulative packet interface ID Pollster Libvirt Number of outgoing error packets
**Meters added in the Pike release**
memory.swap.in Cumulative MB instance ID Pollster Libvirt Memory swap in
memory.swap.out Cumulative MB instance ID Pollster Libvirt Memory swap out
**Meters added in the Queens release**
disk.device.read.latency Cumulative ns disk ID Pollster Libvirt Total time read operations have taken
disk.device.write.latency Cumulative ns disk ID Pollster Libvirt Total time write operations have taken
**Meters added in the Epoxy release**
power.state Gauge state instance ID Pollster Libvirt virDomainState of the VM

.. note::

   To enable libvirt ``memory.usage`` support, you need to install libvirt version 1.1.1+ and QEMU version 1.5+, and a suitable balloon driver must be present in the image. This applies particularly to Windows guests; most modern Linux distributions already have it built in. Telemetry cannot fetch ``memory.usage`` samples without the balloon driver in the image.

.. note::

   To enable libvirt ``disk.*`` support when running on RBD-backed shared storage, you need to install libvirt version 1.2.16+.

OpenStack Compute is capable of collecting CPU-related meters from the compute host machines. To use this, set the ``compute_monitors`` option to ``cpu.virt_driver`` in the nova.conf configuration file. For further information, see the Compute configuration section in the Compute chapter of the OpenStack Configuration Reference.
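
The setting described above is a one-line nova.conf change::

   [DEFAULT]
   # Enable the CPU host monitor so the compute.node.cpu.* notifications
   # below are emitted.
   compute_monitors = cpu.virt_driver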

The following host machine related meters are collected for OpenStack Compute:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
compute.node.cpu.frequency Gauge MHz host ID Notification CPU frequency
compute.node.cpu.kernel.time Cumulative ns host ID Notification CPU kernel time
compute.node.cpu.idle.time Cumulative ns host ID Notification CPU idle time
compute.node.cpu.user.time Cumulative ns host ID Notification CPU user mode time
compute.node.cpu.iowait.time Cumulative ns host ID Notification CPU I/O wait time
compute.node.cpu.kernel.percent Gauge % host ID Notification CPU kernel percentage
compute.node.cpu.idle.percent Gauge % host ID Notification CPU idle percentage
compute.node.cpu.user.percent Gauge % host ID Notification CPU user mode percentage
compute.node.cpu.iowait.percent Gauge % host ID Notification CPU I/O wait percentage
compute.node.cpu.percent Gauge % host ID Notification CPU utilization

IPMI meters
-----------

Telemetry captures notifications emitted by the Bare metal service. The source of these notifications is the IPMI sensors that collect data from the host machine.

Alternatively, IPMI meters can be generated by deploying the ceilometer-agent-ipmi on each IPMI-capable node. For further information about the IPMI agent see telemetry-ipmi-agent.

.. warning::

   To avoid duplication of metering data and unnecessary load on the IPMI interface, do not deploy the IPMI agent on nodes that are managed by the Bare metal service, and keep the ``conductor.send_sensor_data`` option set to ``False`` in the ironic.conf configuration file.
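
The corresponding ironic.conf fragment::

   [conductor]
   # Keep ironic's own sensor-data notifications disabled when the
   # standalone ceilometer-agent-ipmi polls these nodes, to avoid
   # duplicate samples.
   send_sensor_data = False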

The following IPMI sensor meters are recorded:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
hardware.ipmi.fan Gauge RPM fan sensor Notification, Pollster Fan speed in revolutions per minute (RPM)
hardware.ipmi.temperature Gauge C temperature sensor Notification, Pollster Temperature reading from sensor
hardware.ipmi.current Gauge A current sensor Notification, Pollster Current reading from sensor
hardware.ipmi.voltage Gauge V voltage sensor Notification, Pollster Voltage reading from sensor

.. note::

   The sensor data is not available in the Bare metal service by default. To enable the meters and configure this module to emit notifications about the measured values, see the Installation Guide for the Bare metal service.

Besides generic IPMI sensor data, the following Intel Node Manager meters are recorded from capable platforms:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
hardware.ipmi.node.power Gauge W host ID Pollster Current power of the system
hardware.ipmi.node.temperature Gauge C host ID Pollster Current temperature of the system
hardware.ipmi.node.inlet_temperature Gauge C host ID Pollster Inlet temperature of the system
hardware.ipmi.node.outlet_temperature Gauge C host ID Pollster Outlet temperature of the system
hardware.ipmi.node.airflow Gauge CFM host ID Pollster Volumetric airflow of the system, expressed as 1/10th of CFM
hardware.ipmi.node.cups Gauge CUPS host ID Pollster CUPS (Compute Usage Per Second) index data of the system
hardware.ipmi.node.cpu_util Gauge % host ID Pollster CPU CUPS utilization of the system
hardware.ipmi.node.mem_util Gauge % host ID Pollster Memory CUPS utilization of the system
hardware.ipmi.node.io_util Gauge % host ID Pollster IO CUPS utilization of the system

OpenStack Image service
-----------------------

The following meters are collected for OpenStack Image service:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
image.size Gauge B image ID Notification, Pollster Size of the uploaded image
image.download Delta B image ID Notification Image is downloaded
image.serve Delta B image ID Notification Image is served out

OpenStack Block Storage
-----------------------

The following meters are collected for OpenStack Block Storage:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
volume.size Gauge GB volume ID Notification Size of the volume
snapshot.size Gauge GB snapshot ID Notification Size of the snapshot
**Meters added in the Queens release**
volume.provider.capacity.total Gauge GB hostname Notification Total volume capacity on host
volume.provider.capacity.free Gauge GB hostname Notification Free volume capacity on host
volume.provider.capacity.allocated Gauge GB hostname Notification Assigned volume capacity on host by Cinder
volume.provider.capacity.provisioned Gauge GB hostname Notification Assigned volume capacity on host
volume.provider.capacity.virtual_free Gauge GB hostname Notification Virtual free volume capacity on host
volume.provider.pool.capacity.total Gauge GB hostname#pool Notification, Pollster Total volume capacity in pool
volume.provider.pool.capacity.free Gauge GB hostname#pool Notification, Pollster Free volume capacity in pool
volume.provider.pool.capacity.allocated Gauge GB hostname#pool Notification, Pollster Assigned volume capacity in pool by Cinder
volume.provider.pool.capacity.provisioned Gauge GB hostname#pool Notification, Pollster Assigned volume capacity in pool
volume.provider.pool.capacity.virtual_free Gauge GB hostname#pool Notification, Pollster Virtual free volume capacity in pool

OpenStack File Share
--------------------

The following meters are collected for OpenStack File Share:

Name Type Unit Resource Origin Note
**Meters added in the Pike release**
manila.share.size Gauge GB share ID Notification Size of the file share

OpenStack Object Storage
------------------------

The following meters are collected for OpenStack Object Storage:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
storage.objects Gauge object storage ID Pollster Number of objects
storage.objects.size Gauge B storage ID Pollster Total size of stored objects
storage.objects.containers Gauge container storage ID Pollster Number of containers
storage.objects.incoming.bytes Delta B storage ID Notification Number of incoming bytes
storage.objects.outgoing.bytes Delta B storage ID Notification Number of outgoing bytes
storage.containers.objects Gauge object storage ID/container Pollster Number of objects in container
storage.containers.objects.size Gauge B storage ID/container Pollster Total size of stored objects in container

Ceph Object Storage
-------------------

In order to gather meters from Ceph, you have to install and configure the Ceph Object Gateway (radosgw) as described in the Installation Manual. You also have to enable usage logging in order to get the related meters from Ceph. You will need an admin user with ``users``, ``buckets``, ``metadata`` and ``usage`` caps configured.

In order to access Ceph from Telemetry, you need to specify a service group for ``radosgw`` in the ceilometer.conf configuration file, along with the ``access_key`` and ``secret_key`` of the admin user mentioned above.
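
A sketch of the relevant ceilometer.conf fragments, assuming the section and option names used by recent Ceilometer releases (verify against your version's configuration reference); the credential values are placeholders::

   [service_types]
   # The service type the radosgw endpoint is registered as.
   radosgw = object-store

   [rgw_admin_credentials]
   # Credentials of the radosgw admin user with users, buckets, metadata
   # and usage caps. The values below are placeholders.
   access_key = ADMIN_ACCESS_KEY
   secret_key = ADMIN_SECRET_KEY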

The following meters are collected for Ceph Object Storage:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
radosgw.objects Gauge object storage ID Pollster Number of objects
radosgw.objects.size Gauge B storage ID Pollster Total size of stored objects
radosgw.objects.containers Gauge container storage ID Pollster Number of containers
radosgw.api.request Gauge request storage ID Pollster Number of API requests against Ceph Object Gateway (radosgw)
radosgw.containers.objects Gauge object storage ID/container Pollster Number of objects in container
radosgw.containers.objects.size Gauge B storage ID/container Pollster Total size of stored objects in container

.. note::

   The usage-related information may not be updated right after an upload or download, because the Ceph Object Gateway needs time to update the usage properties. For instance, the default configuration needs approximately 30 minutes to generate the usage logs.

OpenStack Identity
------------------

The following meters are collected for OpenStack Identity:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
identity.authenticate.success Delta user user ID Notification User successfully authenticated
identity.authenticate.pending Delta user user ID Notification User pending authentication
identity.authenticate.failure Delta user user ID Notification User failed to authenticate

OpenStack Networking
--------------------

The following meters are collected for OpenStack Networking:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
bandwidth Delta B label ID Notification Bytes through this L3 metering label

VPN-as-a-Service (VPNaaS)
-------------------------

The following meters are collected for VPNaaS:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
network.services.vpn Gauge vpnservice vpn ID Pollster Existence of a VPN
network.services.vpn.connections Gauge ipsec_site_connection connection ID Pollster Existence of an IPSec connection

Firewall-as-a-Service (FWaaS)
-----------------------------

The following meters are collected for FWaaS:

Name Type Unit Resource Origin Note
**Meters added in the Mitaka release or earlier**
network.services.firewall Gauge firewall firewall ID Pollster Existence of a firewall
network.services.firewall.policy Gauge firewall_policy firewall ID Pollster Existence of a firewall policy