
The collect tool executes various Linux and system commands to gather data
into an archived collect bundle. System administrators often need to run
collect on busy, in-service systems and have reported undesirable CPU spikes
caused by certain collect operations. While the collect tool already employs
throttling for ssh and scp, its data collection and archiving commands
currently lack similar safeguards.

This update introduces the following enhancements to mitigate CPU spikes and
improve performance on heavily loaded in-service servers:

- Remove one unnecessary tar archive operation.
- Add tar archive checkpoint option support with an action handler
  (sketched after this commit message).
- Remove one redundant kubelet api-resources call in the containerization
  plugin.
- Add --chunk-size=50 support to the all-in-one kubelet get api-resources
  command to help throttle this long-running, heavyweight command; 50 yields
  the lowest k8s API latency as measured with the k8smetrics tool.
- Launch collect plugins with 'nice' and 'ionice' attributes.
- Add 'nice' and 'ionice' attributes to select commands.
- Add sleep delays after known CPU-intensive data collection commands.
- Remove the unnecessary -v (verbose) option from all tar commands.
- Add a run_command utility that times the execution of commands and
  adaptively adds a small post-execution delay based on how long the command
  took to run (an illustrative sketch follows this commit message).
- Reduce the CPU impact of the containerization plugin by adding periodic
  delays.
- Add a few periodic delays in long-running or CPU-intensive plugins.
- Create a collect command timing log that is added to each host collect
  tarball; it records how long each plugin took to run as well as the
  commands called with the new run_command function.
- Fix an issue in the networking plugin.
- Add a 60 second timeout for the heavyweight 'lsof' command.
- Fix the delimiter string hostname in all plugins.
- Increase the default global timeout from 20 to 30 minutes.
- Increase the default collect_host timeout from 600 to 900 seconds.
- Increment the tool minor version.

These improvements aim to minimize the performance impact of running collect
on busy in-service systems.

Note: When a process is started with nice, its CPU priority is inherited by
all threads spawned by that process. However, nice does not restrict the
total CPU time a process or its threads can use when there is no contention.

Test Plan:

PASS: Verify build and install of the collect package.
PASS: Verify collect runtime is not substantially longer.
PASS: Verify tar checkpoint handling on a busy system where the checkpoint
      action handler detects and invokes system overload handling.
PASS: Verify some CPU spike reduction compared to before the update.

Regression:

PASS: Compare collect bundle size and contents before and after the update.
PASS: Soak collect on a busy/overloaded AIO-SX system.
PASS: Verify the report tool reports the same data before/after the update.
PASS: Verify multi-node collect.

Closes-Bug: 2090923
Change-Id: If698d5f275f4482de205fa4a37e0398b19800777
Signed-off-by: Eric MacDonald <eric.macdonald@windriver.com>
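The two sketches below illustrate the throttling techniques referenced above.
They are illustrative only: the helper names, paths, thresholds, and delay
math (run_command, COLLECT_RUNCMD_DELAY, the timing log location, and the
checkpoint handler script) are assumptions for the example and may differ
from the actual collect_utils implementation.

A minimal run_command-style wrapper that runs a command at reduced CPU/IO
priority, records its duration in a timing log, and adds a small adaptive
post-execution delay:

    #!/bin/bash
    # Hypothetical adaptive command wrapper; the real run_command in
    # collect_utils may differ in names and behaviour.

    COLLECT_CMD_TIMING_LOG="${COLLECT_CMD_TIMING_LOG:-/tmp/collect_cmd_timing.log}"
    COLLECT_RUNCMD_DELAY="${COLLECT_RUNCMD_DELAY:-1}"   # base post-command delay (seconds)

    run_command()
    {
        local cmd="$1"
        local logfile="$2"
        local start elapsed delay

        start=${SECONDS}

        # Run the command at reduced CPU and IO priority.
        nice -n 19 ionice -c 3 bash -c "${cmd}" >> "${logfile}" 2>&1

        elapsed=$(( SECONDS - start ))

        # Record how long the command took in the per-host timing log.
        echo "$(date '+%T') ${elapsed}s : ${cmd}" >> "${COLLECT_CMD_TIMING_LOG}"

        # Adaptive throttle: the longer the command ran, the longer the pause
        # afterwards, capped so collect does not run substantially longer.
        delay=$(( COLLECT_RUNCMD_DELAY + elapsed / 10 ))
        [ ${delay} -gt 10 ] && delay=10
        sleep ${delay}
    }

    # Example usage:
    # run_command "ostree admin status -v" "${LOGFILE}"

A sketch of the tar checkpoint handling: GNU tar periodically invokes a small
handler that can pause archiving when the system looks overloaded. The handler
path and load threshold are hypothetical.

    #!/bin/bash
    # Hypothetical handler, saved for example as /tmp/collect_checkpoint_handler.sh:
    # pause briefly when the 1-minute load average exceeds a threshold.
    load=$(cut -d' ' -f1 /proc/loadavg)
    awk -v l="${load}" 'BEGIN { exit !(l > 8) }' && sleep 2

Invoked from the archive step roughly as follows (paths are illustrative):

    # Run the handler every 1000 archive records via tar's checkpoint action.
    nice -n 19 ionice -c 3 tar --checkpoint=1000 \
        --checkpoint-action=exec=/tmp/collect_checkpoint_handler.sh \
        -czf /scratch/extra.tgz /var/extra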
#! /bin/bash
#
# Copyright (c) 2022,2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Loads Up Utilities and Commands Variables
source /usr/local/sbin/collect_parms
source /usr/local/sbin/collect_utils

SERVICE="ostree"
LOGFILE="${extradir}/${SERVICE}.info"

SYSROOT_REPO="/sysroot/ostree/repo"
FEED_OSTREE_BASE_DIR="/var/www/pages/feed"
OSTREE_REF="starlingx"

echo "${hostname}: OSTREE Info .......: ${LOGFILE}"

###############################################################################
# OSTREE Info:
###############################################################################

###############################################################################
# ostree admin status (deployment)
# -v outputs additional data to stderr
###############################################################################
delimiter ${LOGFILE} "ostree admin status -v"
ostree admin status -v >> ${LOGFILE} 2>&1

###############################################################################
# ostree logs for the sysroot and patch feeds
###############################################################################
delimiter ${LOGFILE} "ostree log ${OSTREE_REF} --repo=${SYSROOT_REPO}"
ostree log ${OSTREE_REF} --repo=${SYSROOT_REPO} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

for feed_dir in ${FEED_OSTREE_BASE_DIR}/*/ostree_repo
do
    sleep ${COLLECT_RUNCMD_DELAY}
    delimiter ${LOGFILE} "ostree log ${OSTREE_REF} --repo=${feed_dir}"
    ostree log ${OSTREE_REF} --repo=${feed_dir} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
done

###############################################################################
# ostree repo summary for the feed ostrees
###############################################################################
for feed_dir in ${FEED_OSTREE_BASE_DIR}/*/ostree_repo
do
    sleep ${COLLECT_RUNCMD_DELAY}
    delimiter ${LOGFILE} "ostree summary -v --repo=${feed_dir}"
    ostree summary -v --repo=${feed_dir} >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
done

exit 0