July 17

fix cPanel Solr missing indexes

It is terrible when you realize that your Solr is not working as it should because it doesn't have any indexes, nothing…

Okay, if it is not working the native way, let's help it by rebuilding the indexes overnight via a very simple loop:

for i in $(uapi --user=your_cpanel_account Email list_pops_with_disk | grep -w "login:" | awk '{print $2}'); do
  doveadm search -u "$i" text "jnk3d9wl2la2jkdsp0p2edscgsdkdsopek"
done

Yep, it will go to each email account and search through all emails inside; since the random string matches nothing, Dovecot has to index every message to answer the query, which rebuilds the FTS index as a side effect.
I am still not sure why Solr won't do it as it should in the cPanel build, but okay, we can handle it this way.
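
To actually run it "during the night", a minimal cron sketch; the script path and log file here are hypothetical, assuming the loop above is saved as /root/rebuild_fts.sh:

# root's crontab: rebuild the FTS indexes at 03:00 every night
0 3 * * * /bin/bash /root/rebuild_fts.sh >> /var/log/fts_rebuild.log 2>&1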

Related logs:

Solr log:  /home/cpanelsolr/server/logs/solr.log

Dovecot log: /var/log/maillog

Related scripts:

Install Solr from the command line: /scripts/install_dovecot_fts

Rebuild Solr indexes (but it is not working): /scripts/rescan_user_dovecot_fts

Bonus: step-by-step fix for the error Error: fts_solr: Lookup failed: 500 Server Error

/usr/local/cpanel/3rdparty/scripts/cpanel_dovecot_solr_rebuild_index
/scripts/restartsrv_cpanel_dovecot_solr
/scripts/rescan_user_dovecot_fts
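
After those three, the Dovecot log mentioned above should stop accumulating new failures; a quick way to keep an eye on it:

grep "fts_solr: Lookup failed" /var/log/maillog | tail -n5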

 

July 15

resize qcow2 with Windows on board from the command line. Yes, I did it

#Check that nothing is living on these devices

fdisk -l /dev/nbd0
fdisk -l /dev/nbd1

#Check the old disk info, with special attention to the damn backing file:

qemu-img info old.qcow2

#If there is one, specify the path to it. That's a pain: the backing file first has to be brought over and put in place, and only then:

mkdir -p /var/lib/libvirt/images/
qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/06xysn6-4379-489c-9d47-833492d0aaf1 new.qcow2 200G

#If there is none:

qemu-img create -f qcow2 new.qcow2 200G
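
#One prerequisite these notes assume is already in place: qemu-nbd needs the nbd kernel module, so if /dev/nbd0 does not exist yet, load it first:

modprobe nbd max_part=16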

#Connect both disks

qemu-nbd --connect=/dev/nbd0 old.qcow2
qemu-nbd --connect=/dev/nbd1 new.qcow2

#Check that we can see the partitions on the source; the start and end of each partition are what matter here

fdisk -l /dev/nbd0

What we really care about are the partition start and end sectors:

Device Boot Start End Sectors Size Id Type
/dev/nbd0p1 * 2048 1126399 1124352 549M 7 HPFS/NTFS/exFAT
/dev/nbd0p2 1126400 414511103 413384704 197.1G 7 HPFS/NTFS/exFAT

#Grab the partition table from there into a file:

sfdisk -d /dev/nbd0 > partition_table.txt
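
#For reference, the dump should look roughly like this (reconstructed from the fdisk output above, purely as an illustration):

label: dos
label-id: 0xec93683b
device: /dev/nbd0
unit: sectors

/dev/nbd0p1 : start=2048, size=1124352, type=7, bootable
/dev/nbd0p2 : start=1126400, size=413384704, type=7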

#Check how many sectors the new disk has:

fdisk -l /dev/nbd1|grep -w Disk

#Disk /dev/nbd1: 200 GiB, 214748364800 bytes, 419430400 sectors

#Edit the partition table: change whatever needs to grow (typically the size= of the last partition) so everything still fits within the new disk's sector count from above:

nano partition_table.txt

#Roll the partition table out onto the new disk:

sfdisk /dev/nbd1 < partition_table.txt

———————-Output—————————-

root@nodes:/the_way/# sfdisk /dev/nbd1 < partition_table.txt
Checking that no-one is using this disk right now ... OK

Disk /dev/nbd1: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xec93683b

Old situation:

Device Boot Start End Sectors Size Id Type
/dev/nbd1p1 * 2048 1126399 1124352 549M 7 HPFS/NTFS/exFAT
/dev/nbd1p2 1126400 62912511 61786112 29.5G 7 HPFS/NTFS/exFAT

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS (MBR) disklabel with disk identifier 0xec93683b.
/dev/nbd1p1: Created a new partition 1 of type 'HPFS/NTFS/exFAT' and of size 549 MiB.
Partition #1 contains a ntfs signature.
/dev/nbd1p2: Created a new partition 2 of type 'HPFS/NTFS/exFAT' and of size 197.1 GiB.
Partition #2 contains a ntfs signature.
/dev/nbd1p3: Done.

New situation:
Disklabel type: dos
Disk identifier: 0xec93683b

Device Boot Start End Sectors Size Id Type
/dev/nbd1p1 * 2048 1126399 1124352 549M 7 HPFS/NTFS/exFAT
/dev/nbd1p2 1126400 414511103 413384704 197.1G 7 HPFS/NTFS/exFAT

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

———————-^Output^—————————-

#Create the partition map for the new disk:

#dd or ntfsclone it partition by partition; the smaller block size is there so it doesn't murder the host system
##for the service partitions: dd
##for anything that is definitely NTFS: use ntfsclone !!!!! note that target and source are swapped relative to dd, i.e. ntfsclone --overwrite /dev/TARGET /dev/SOURCE --force
#and that ^^^^^ is rather non-obvious
#why not dd everywhere: because it would "make" the disk image file "heavy", generously padding everything with zeros, while ntfsclone copies only used clusters

dd if=/dev/nbd0p1 of=/dev/nbd1p1 bs=1M status=progress
ntfsclone --overwrite /dev/nbd1p2 /dev/nbd0p2 --force

….
———————-Output—————————-

root@nodes:/the_way/# ntfsclone --overwrite /dev/nbd1p2 /dev/nbd0p2 --force
ntfsclone v2022.10.3 (libntfs-3g)
NTFS volume version: 3.1
Cluster size : 4096 bytes
Current volume size: 211652964352 bytes (211653 MB)
Current device size: 211652968448 bytes (211653 MB)
Scanning volume ...
100.00 percent completed
Accounting clusters ...
Space in use : 27538 MB (13.0%)
Cloning NTFS ...
100.00 percent completed
Syncing ...
root@nodes:/the_way/#

———————-^Output^—————————-

#Take everything back apart

qemu-nbd --disconnect /dev/nbd1
qemu-nbd --disconnect /dev/nbd0

Done. The new disk is ready to use; we can attach it and give it a try.
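
Before attaching it, a quick sanity check on the result does not hurt (same file name as above):

qemu-img check new.qcow2
qemu-img info new.qcow2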

July 15

Nginx Gzip Settings

Not ideal, but not bad either

## Gzip Settings ##
gzip on;
gzip_buffers 16 8k;
gzip_comp_level 5;
gzip_disable "msie6";
gzip_min_length 256;
gzip_proxied any;
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-javascript
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/javascript
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy
text/x-js
text/xml;
gzip_vary on;

# Security Headers
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
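
A quick way to confirm compression is actually applied (swap in your own URL):

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" https://your-site.example/ | grep -i content-encoding
# Content-Encoding: gzip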

 

July 15

cPanel Calendar delegation

This method is not mentioned in their API docs, but it works pretty well:

uapi --user=username CPDAVD add_delegate delegator='whoshares@domain.com' delegatee="whoreceive@domain.com" calendar='addressbook-ID-here' calname='address book name' readonly=1

July 15

check CRT and KEY pair

A simple command-line check that we have a valid certificate/key pair:

openssl x509 -noout -modulus -in /home/server.crt | openssl md5
(stdin)= 5f39967e4c06f71c0b00336a8317ddd6
openssl rsa -noout -modulus -in /home/server.key | openssl md5
(stdin)= 5f39967e4c06f71c0b00336a8317ddd6

The hashes should be the same.
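
The -modulus trick only works for RSA keys; comparing the public keys the same way covers any key type (ECDSA included):

openssl x509 -noout -pubkey -in /home/server.crt | openssl md5
openssl pkey -pubout -in /home/server.key | openssl md5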

July 15

Very old SVN and SSL storage reminder

Create a password in the "OBF:1vn21ugu1saj1v9i1v941sar1ugw1vo0" form:
java -cp /home/svn/csvn/appserver/lib/jetty-util-8.1.16.v20140903.jar org.eclipse.jetty.util.security.Password 4C4jnsIh8a25gojz4a5l

where 4C4jnsIh8a25gojz4a5l is the password
and this is its obfuscated (OBF) form: OBF:1kqv1cwd1iky1kfp1z031wug1pi11u2o1apu1i9a1i6o1apo1u2m1pk51wty1z0r1kcp1ing1cvl1ktz

openssl pkcs12 -export \
-in /home/svn/csvn/appserver/etc/server.crt \
-inkey /home/svn/csvn/appserver/etc/server.key \
-out bundle.p12 \
-name svnedge \
-passout pass:4C4jnsIh8a25gojz4a5l


keytool -importkeystore \
-destkeystore /home/svn/csvn/appserver/etc/svnedge.jks \
-srckeystore bundle.p12 \
-srcstoretype PKCS12 \
-alias svnedge \
-deststorepass 4C4jnsIh8a25gojz4a5l \
-destkeypass 4C4jnsIh8a25gojz4a5l \
-srcstorepass 4C4jnsIh8a25gojz4a5l

keytool -list -keystore /home/svn/csvn/appserver/etc/svnedge.jks -storepass 4C4jnsIh8a25gojz4a5l

July 15

Test connection to SMTP via telnet (no TLS)

Base64-encoding the credentials:

echo -n 'someaddress@somedomain.com' | base64 # c29tZWFkZHJlc3NAc29tZWRvbWFpbi5jb20=
echo -n 'faLYu4HFwRf6a' | base64 # ZmFMWXU0SEZ3UmY2YQ==

Connection example:

telnet localhost 587
EHLO test.local
AUTH LOGIN
334 VXNlcm5hbWU6
#I send:
c29tZWFkZHJlc3NAc29tZWRvbWFpbi5jb20= ← this is base64 of "someaddress@somedomain.com"
#the server replies:
334 UGFzc3dvcmQ6
#I send:
ZmFMWXU0SEZ3UmY2YQ== ← this is base64 of "faLYu4HFwRf6a"
#the server replies:
235 Authentication succeeded

Sending test email:

MAIL FROM:<someaddress2@somedomain2.com>
RCPT TO:<someaddress@somedomain.com>
DATA
Subject: test mail

This is a test.
.
QUIT
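
For the TLS variant, the same dialogue works through openssl s_client instead of telnet:

openssl s_client -connect localhost:587 -starttls smtp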

 

July 15

Very simple and stupid script to check VM RAM and CPU using virsh

#!/bin/bash
# walk over every VM whose name contains "VM"
for i in $(virsh list --all | grep VM | awk '{print $2}'); do
  # vCPU count from the domain XML
  cores=$(virsh dumpxml "$i" | grep -w vcpu | awk -F'>' '{print $2}' | awk -F'<' '{print $1}')
  # memory is reported in KiB, so convert it to GB
  ram_kb=$(virsh dumpxml "$i" | grep "memory unit" | awk -F'>' '{print $2}' | awk -F'<' '{print $1}')
  ram=$((ram_kb / 1024 / 1024))

  echo -e "-----> $i info:\nCPU: $cores\nRAM: $ram GB"
  # every disk image attached from the default images directory
  for disks_list in $(virsh dumpxml "$i" | grep "/var/lib/libvirt/images/" | awk -F"'" '{print $2}'); do
    image_size=$(ls -alh "$disks_list" | awk '{print $5}')
    echo -e "DISK: $image_size\nDisk location: $disks_list"
  done
  echo -e "\n\n"
done
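
As an aside, virsh domblklist gives the attached disks more directly, if you prefer it over grepping the XML (your_vm_name is a placeholder):

virsh domblklist your_vm_name | awk 'NR>2 && $2 {print $2}'
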
July 15

simple Nginx proxy with backup connection to backend

Proxy-side config:

upstream itday.org.ua {
    # main server: requests go here while it is alive
    server your_awesome_IP1_here:8891 max_fails=3 fail_timeout=30s;

    # backup server: traffic is forwarded here when the first one dies
    server your_awesome_IP2_here:8891 backup;
}

server {
    listen your_awesome_PROXY_ip:80;
    server_name itday.org.ua www.itday.org.ua;

    # fix to make it possible to handle Let's Encrypt for this domain
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt; # Certbot stores challenge files here
    }

    location / {
        proxy_pass https://itday.org.ua;

        # basic headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # timeouts
        proxy_connect_timeout 5s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;

        # Keepalive
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

 

I am using an Nginx-to-Nginx connection, so I need to tell my backend the proxy IP in order to see the real client IPs in the logs. I did it via a simple file include inside the http section:

include /etc/nginx/backend_real_ips.conf;

The backend_real_ips.conf file content:

#
set_real_ip_from your_awesome_PROXY_ip; # Our super proxy
real_ip_header X-Real-IP; # We take the client IP from this header
real_ip_recursive on; # We can handle a chain of several IPs if one shows up here, why not
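
A quick way to test the whole chain: hit the proxy, then check that the backend logs the real client IP instead of the proxy's (the log path below is the stock Nginx one; adjust it to your backend's access_log):

curl -s -o /dev/null http://your_awesome_PROXY_ip/ -H "Host: itday.org.ua"
tail -n1 /var/log/nginx/access.log # run this on the backend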