December 9

log all bash shell activity to a separate file

log location:
echo 'local6.* /var/log/commands.log' >> /etc/rsyslog.d/bash.conf
add the following line at the end of the /etc/bashrc file:
export PROMPT_COMMAND='RETRN_VAL=$?; logger -p local6.debug "$(whoami) [$$]: $(history 1 | sed "s/^[ ]*[0-9]\\+[ ]*//" ) [$RETRN_VAL]"'

restart the service:

service rsyslog restart
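
To verify, open a new login shell (so the new PROMPT_COMMAND is picked up), run any command, and watch the log:

tail -f /var/log/commands.log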

done 🙂

November 9

Cron memo

# ┌────────── minute (0 - 59)
# │ ┌──────── hour (0 - 23)
# │ │ ┌────── day of month (1 - 31)
# │ │ │ ┌──── month (1 - 12)
# │ │ │ │ ┌── day of week (0 - 6 => Sunday - Saturday, or
# │ │ │ │ │ 1 - 7 => Monday - Sunday)
# ↓ ↓ ↓ ↓ ↓
# * * * * * command to be executed
#
# 0 - Sun Sunday
# 1 - Mon Monday
# 2 - Tue Tuesday
# 3 - Wed Wednesday
# 4 - Thu Thursday
# 5 - Fri Friday
# 6 - Sat Saturday
# 7 - Sun Sunday
#
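
For example, a hypothetical entry (path made up for illustration) that runs a script every Monday at 02:30:

30 2 * * 1 /usr/local/bin/example.sh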

October 12

extremely stupid backup script (files+dbs)

#!/bin/bash

if [[ "$1" != "db_backup" && "$1" != "files_backup" && "$1" != "all" ]]; then
echo -e "Please run the script with following keys:\ndb_backup -> to do backup of all DBs\nfiles_backup -> to do backup of all users without DBs\nall -> to do users and their DBs backups at once"
exit 1
fi
datetime=$(date +%d-%m-%Y_%H-%M-%S)
db_backup_function(){
    mkdir -pv /backup/dbs/"$datetime"
    for i in $(mysql -e "show databases;" | grep -wv "Database\|performance_schema\|information_schema\|sys\|mysql"); do
        mysqldump "$i" > /backup/dbs/"$datetime"/"$i".sql
        if [[ $? -ne 0 ]]; then
            echo "$datetime DB $i FAILED" >> /backup/failed_DBs.log
        fi

        zip --quiet /backup/dbs/"$datetime"/"$i".sql.zip /backup/dbs/"$datetime"/"$i".sql
        if [[ $? -eq 0 ]]; then
            rm -f /backup/dbs/"$datetime"/"$i".sql
        fi
    done
}

files_backup_function(){
    mkdir -pv /backup/files/"$datetime"
    cd /home/ || exit 1
    for i in *; do
        zip -r --quiet --symlinks /backup/files/"$datetime"/"$i".zip "$i"
        if [[ $? -ne 0 ]]; then
            echo "$datetime files for $i account FAILED" >> /backup/failed_users.log
        fi
    done
}

if [[ "$1" = "db_backup" ]];then
db_backup_function
fi

if [[ "$1" = "files_backup" ]];then
files_backup_function
fi

if [[ "$1" = "all" ]];then
db_backup_function
files_backup_function
fi
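
Hypothetical usage, assuming the script is saved as /root/backup.sh:

chmod +x /root/backup.sh
/root/backup.sh all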

October 12

Small reminder about commands

Multi-threaded file transfer:

rclone copy :sftp:/disk_folder/disk_hash_id /the_way/to_target_folder/ --sftp-host=target_host_name_or_ip --sftp-user=your_username --sftp-ask-password --sftp-key-use-agent=false --sftp-shell-type=unix --sftp-md5sum-command=md5sum --progress --transfers=1 --multi-thread-streams=6 --checksum --bwlimit=300M

Single-threaded file transfer with additional validation:
rsync --progress --checksum --checksum-choice=md5 /disk_folder/disk_hash_id your_username@target_host_name_or_ip:/the_way/to_target_folder/

Aggregated iotop stats (very helpful for debugging in some cases):

iotop -aoP

Manual checksum for a file:

md5sum /path/to/file

Look deeper at TCP timeouts and waits:

netstat -napo|grep TIME_WAIT

Check disk info:
qemu-img info disk_name

Create disk from command line:
qemu-img create -f qcow2 new_disk_name 15G

Convert/clone a disk (this also removes the dependency on a backing file, producing a standalone VM image):
qemu-img convert -O qcow2 old_with_backing_dependency.qcow2 new_standalone.qcow2

Add a backing file to an already existing disk (or replace the current one):
qemu-img rebase -u -b /way_to_the_backing/file -F qcow2 /way_to_disk/new.qcow2

Create a disk WITH a backing file:
qemu-img create -f qcow2 -F qcow2 -b /way_to_the_backing/file new.qcow2 500G

Resize disk:
qemu-img resize disk.qcow2 +10G

Attach the disk to VM:
virsh attach-disk --domain short_VM_ID /way_to_disk/file --target vdx --driver "qemu" --subdriver "qcow2"

Detach the disk from VM:
virsh detach-disk --domain short_VM_ID --target vdx

Check VM details:
virsh dumpxml short_VM_ID

Show all VMs:
virsh list --all

September 11

CXS mount via FUSE and scan

Short reminder

Mount:

sshfs root@server2scan:/var/www/customer_id/ /mnt/customer_id -o IdentityFile=/root/.ssh/scan.key -o allow_other

Scan:

cxs --force --deep --timemax 1000 --filemax 100000 --report /root/scan_results/customer_id.scan_report /mnt/customer_id/
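
Unmount when the scan is finished:

fusermount -u /mnt/customer_id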


August 14

Resize disk in Linux OS:

1. Use lsblk to locate partitions and their IDs:

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda      8:0    0  40G  0 disk
├─vda1   8:1    0   1M  0 part
└─vda2   8:2    0  40G  0 part /

2. LVM case:

growpart /dev/vda 2
pvresize /dev/vda2
lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
xfs_growfs /dev/mapper/ubuntu--vg-ubuntu--lv

OR, if we use ext4:

resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

3. non-LVM case:

growpart /dev/vda 2
xfs_growfs /

OR, if we use ext4:

resize2fs /dev/vda2
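
Either way, df -h should confirm the new size afterwards:

df -h /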

P.S. growpart is provided by the following package in RHEL:

cloud-utils-growpart

August 11

Nagios script to check host uptime

#!/bin/bash
# Simple script to check the server uptime in seconds
# written by Vasyl T.
minimum='86400' # 1 day (24 hours) in seconds
real=$(/usr/bin/awk -F'.' '{print $1}' /proc/uptime)

if [[ "$minimum" -gt "$real" ]]; then
    echo "CRITICAL. Server uptime is $real seconds, expected more than $minimum"
    exit 2
else
    days=$(($real / $minimum))
    echo "OK. Server uptime is $real seconds or $days day(s)"
    exit 0
fi
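
To test the check by hand (the path is an assumption; save it wherever your Nagios plugins live):

/usr/local/nagios/libexec/check_uptime.sh; echo "exit code: $?"
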
August 4

track requested URLs in Joomla 3.6, by ChatGPT

Simple code to insert at the top of the index.php file to log every outgoing HTTP/HTTPS request the website makes (note that the trap also blocks those requests):

// 🔥 HTTP Debug Trap with backtrace
class MyHttpSniffer {
    public $context;

    function stream_open($path, $mode, $options, &$opened_path) {
        $logFile = __DIR__ . '/http-trap.log';
        $timestamp = date('c');

        // Capture the backtrace
        ob_start();
        debug_print_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS);
        $backtrace = ob_get_clean();

        // Build the log entry
        $logEntry = "[$timestamp] HTTP REQUEST TO: $path\n";
        $logEntry .= "Backtrace:\n$backtrace\n";
        $logEntry .= str_repeat("-", 80) . "\n";

        file_put_contents($logFile, $logEntry, FILE_APPEND);

        // Do not open the stream: block the call
        return false;
    }

    function stream_stat() {}
    function stream_read($count) { return false; }
    function stream_eof() { return true; }
    function stream_seek($offset, $whence) { return false; }
    function stream_tell() { return 0; }
}

@stream_wrapper_unregister("http");
@stream_wrapper_unregister("https");
@stream_wrapper_register("http", "MyHttpSniffer");
@stream_wrapper_register("https", "MyHttpSniffer");