Install PIP (Python Package Manager):
$ sudo easy_install pip
Install or upgrade the AWS CLI:
$ pip install awscli --upgrade --user
Add to bin:
$ ln -s /Users/<username>/Library/Python/2.7/bin/aws /usr/local/bin/aws
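The symlink step can be sketched generically on throwaway paths (all paths below are hypothetical stand-ins for the Python bin path above):

```shell
# Sketch: link a tool from its install dir into a directory that is on $PATH.
tmp=$(mktemp -d)
mkdir -p "$tmp/python-bin" "$tmp/usr-local-bin"
printf '#!/bin/sh\necho aws-cli\n' > "$tmp/python-bin/aws"
chmod +x "$tmp/python-bin/aws"
# symlink the real binary into the "bin" directory
ln -s "$tmp/python-bin/aws" "$tmp/usr-local-bin/aws"
"$tmp/usr-local-bin/aws"   # runs the real binary via the symlink
```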
Uninstall:
$ pip uninstall awscli
First Run:
https://us-east-2.console.aws.amazon.com/ecs/home?region=us-east-2#/firstRun
Configure AWS:
$ aws configure
>> AWS Access Key ID:
Retrieve the docker login command to authenticate your Docker client to your registry:
$ aws ecr get-login --no-include-email --region us-east-2
How to guide: https://torsion.org/borgmatic/docs/how-to/set-up-backups/
Usage with the Docker image; the container name is borgmatic.
If a previous backup was canceled or a mount error occurred, the repository might be locked. Unlock it with:
docker exec borgmatic sh -c "cd && borgmatic borg break-lock"
Show repository information:
docker exec borgmatic sh -c "cd && borgmatic info"
docker exec borgmatic sh -c "cd && borgmatic rinfo"
List database backups in repository:
docker exec borgmatic sh -c "cd && borgmatic list --archive latest --find .borgmatic/*_databases"
Manually trigger backups:
docker exec borgmatic sh -c "cd && borgmatic --stats -v 1 --files 2>&1"
docker exec borgmatic sh -c "cd && borgmatic --config /etc/borgmatic.d/config_File.yaml --stats -v 1 --files 2>&1"
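The `sh -c "cd && …"` wrapper in these commands only changes to the container user's home directory before running borgmatic. The same pattern outside docker:

```shell
# `cd` with no argument switches to $HOME; && runs the next command
# only if the cd succeeded.
sh -c 'cd && pwd'   # prints the home directory
```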
Get versions
$ docker version
$ docker --version
$ docker compose --version
$ docker machine --version
Login to Docker, use your Docker ID (not the email address):
$ docker login
$ docker run <params> <image> (e.g. docker run hello-world)
$ docker run -d -p 80:80 --name webserver nginx
$ docker stop webserver
$ docker start webserver
List all running containers:
$ docker ps
List containers including stopped ones:
$ docker ps -a
Remove the container (stops it if running); does not remove the image:
$ docker rm -f webserver
List all images:
$ docker images
Removes the nginx image:
$ docker rmi nginx
Stop / Remove all containers ⚠️:
$ docker stop $(docker ps -a -q)
$ docker rm $(docker ps -a -q)
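These rely on `$(…)` command substitution: `docker ps -a -q` prints bare container IDs, which become the arguments to stop/rm. A docker-free sketch with hypothetical IDs:

```shell
# stand-in for `docker ps -a -q`: prints one hypothetical container ID per line
list_ids() { printf '%s\n' 1a2b3c 4d5e6f; }
# the printed IDs become arguments, exactly like $(docker ps -a -q)
echo stop $(list_ids)   # stop 1a2b3c 4d5e6f
```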
Build a docker image (cur dir):
$ docker build -t safe-harbor/testrepo .
Tag the docker image for the repo:
$ docker tag safe-harbor/testrepo:latest 188408066687.dkr.ecr.us-east-2.amazonaws.com/safe-harbor/testrepo:latest
Push it to the AWS repo:
$ docker push 188408066687.dkr.ecr.us-east-2.amazonaws.com/safe-harbor/testrepo:latest
Cleanup unused containers, networks, images (unreferenced and dangling), and volumes:
docker system prune --all --volumes
Cleanup images:
docker image prune
Remove all unused images, not just dangling ones:
docker image prune --all
Cleanup networks:
docker network prune
Cleanup volumes
docker volume prune
Start a stash with docker-compose.yaml
in current directory (creates containers, networks, volumes etc.):
$ docker compose up
$ docker compose up -d
Stop stash, cleanup containers and networks (docker-compose.yml required):
$ docker compose down
$ docker compose down -v
-v also destroys volumes.
List running stash (docker-compose.yml required):
$ docker compose ps
$ docker ps
docker ps lists all containers, not only the stash.
Start up already existing but stopped containers:
$ docker compose start
Stop containers but don't remove anything:
$ docker compose stop
List packages and their states:
dpkg --list
States (first letter: desired state, second letter: current state):
ii
  i = marked for installation
  i = successfully installed on system
rc
  r = marked for removal
  c = configuration files are currently present in the system
See installed packages:
dpkg -l | grep postgres
ii postgresql-14 14.7-0ubuntu0.22.04.1 amd64 The World's Most Advanced Open Source Relational Database
ii postgresql-client-14 14.7-0ubuntu0.22.04.1 amd64 front-end programs for PostgreSQL 14
ii postgresql-client-common 238 all manager for multiple PostgreSQL client versions
ii postgresql-common 238 all PostgreSQL database-cluster manager
Remove installed packages:
sudo apt remove postgresql-14 postgresql-client-14 postgresql-client-common postgresql-common
See removed packages which still have configuration files:
dpkg -l | grep postgres
rc postgresql-14 14.7-0ubuntu0.22.04.1 amd64 The World's Most Advanced Open Source Relational Database
rc postgresql-client-common 238 all manager for multiple PostgreSQL client versions
rc postgresql-common 238 all PostgreSQL database-cluster manager
Purge the config storages:
dpkg --purge postgresql-14 postgresql-client-common postgresql-common
Nothing is shown anymore with:
dpkg -l | grep postgres
Installation
$ docker pull docker.elastic.co/elasticsearch/elasticsearch:6.0.1
Run it from command line (Development Mode)
$ docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.0.1
Install X-Pack
$ bin/elasticsearch-plugin install x-pack
To start elasticsearch, run this in main directory
$ cd ~/Project/Elasticsearch
$ bin/elasticsearch
If not starting, try
$ sudo find /Path/to/your/elasticsearch-folder -name ".DS_Store" -depth -exec rm {} \;
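What this find does, sketched on a throwaway directory: it walks the tree depth-first and removes every `.DS_Store` it encounters.

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
touch "$tmp/.DS_Store" "$tmp/sub/.DS_Store" "$tmp/keep.txt"
# -depth: process directory contents before the directory itself
find "$tmp" -depth -name ".DS_Store" -exec rm {} \;
find "$tmp" -type f    # only keep.txt is left
```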
Generate Default passwords
$ bin/x-pack/setup-passwords auto
Install X-Pack into Kibana, run in install dir
$ bin/kibana-plugin install x-pack
Add credentials to the kibana.yml file
elasticsearch.username: "kibana"
elasticsearch.password: "<pwd>"
Inspect status of cluster:
$ curl http://127.0.0.1:9200/_cat/health
1472225929 15:38:49 docker-cluster green 2 2 4 2 0 0 0 0 - 100.0%
List disks and partitions:
$ sudo fdisk -l
Convert a .mov to a .mp4:
$ ffmpeg -i <source.mov> -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" <destination.mp4>
Example:
$ ffmpeg -i 'Bildschirmvideo aufnehmen 2020-10-22 um 17.08.24.mov' -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" output.mp4
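The `pad=ceil(iw/2)*2:ceil(ih/2)*2` filter rounds width and height up to the next even number, since the H.264 encoder rejects odd dimensions. The same arithmetic sketched in shell (integer division):

```shell
# round a pixel dimension up to the next even number, like ceil(n/2)*2
even_up() { echo $(( ($1 + 1) / 2 * 2 )); }
even_up 1279   # odd  -> 1280
even_up 720    # even -> 720 (unchanged)
```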
Git move repo (old server <-> new server)
TL;DR:
$ git clone --mirror protocol://domain.tld/reponame
$ cd reponame.git
$ git push --mirror protocol://domain.tld/newrepo
Explained:
Exactly clone a git repo to another place (move it if you want to call it so):
→ http://blog.plataformatec.com.br/2013/05/how-to-properly-mirror-a-git-repository/
So this:
$ git clone --mirror protocol://domain.tld/reponame
Creates a directory like this:
$ ls
reponame.git
If now we clone the repo with...
$ mkdir reponame-cloned
$ git clone reponame.git reponame-cloned
Cloning into 'reponame-cloned'...
done.
... we have a cloned repo of the local bare (mirror) remote:
$ cd reponame-cloned
$ git config --get remote.origin.url
/path/to/reponame.git
Let's create a test file...
$ touch test.file
$ git add test.file
... and see the differences to the cloned repo:
$ git diff --cached
diff --git a/test.file b/test.file
new file mode 100644
index 0000000..e69de29
Now we add the file to the cloned repository...
$ git commit -m "test file added"
[master 10e3550] test file added
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 test.file
... and see the differences to the remote mirror:
$ git log origin/master..HEAD
commit 10e3550e399efefe1657db31159035314d2518c6 (HEAD -> master)
Author: Firstname Lastname <firstname.lastname@example.com>
Date: Mon Apr 9 16:52:21 2018 +0200
test file added
We can also do a dry-run and see what would happen if we push it to the remote mirror. This lists the mirror we cloned from, not the origin the mirror was cloned from:
$ git push -n
To /path/to/reponame.git
a2569e6..10e3550 master -> master
Now let's push these changes to the mirror:
$ git push
Counting objects: 2, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 272 bytes | 272.00 KiB/s, done.
Total 2 (delta 1), reused 0 (delta 0)
To /path/to/reponame.git
a2569e6..10e3550 master -> master
Let's see if we received the changes:
$ cd ../reponame.git
$ git log
commit 10e3550e399efefe1657db31159035314d2518c6 (HEAD -> master)
Author: Some Person <some.person@example.com>
Date: Mon Apr 9 16:52:21 2018 +0200
test file added
commit a2569e6c22b92688b9b7fcf8ce32c2a374decb8e
Merge: b8cb5ad 32b71c0
Author: Other Person <other.person@example.com>
Date: Fri Aug 4 13:14:50 2017 +0200
Merge branch 'somebranch'
Conflicts:
some.file
[...]
So the mirror now has the changes we pushed to it. Let's now remote update the mirror from the very origin repository (-v = verbose):
$ git remote -v update
Fetching origin
From protocol://domain.tld/reponame
 + 10e3550...a2569e6 master -> master (forced update)
Now let's see if the revision of the push with the new file still exists:
$ git log
commit a2569e6c22b92688b9b7fcf8ce32c2a374decb8e
Merge: b8cb5ad 32b71c0
Author: Other Person <other.person@example.com>
Date: Fri Aug 4 13:14:50 2017 +0200
Merge branch 'somebranch'
Conflicts:
some.file
[...]
So it's missing, now let's clone the mirror again and see the file missing:
$ mkdir reponame-cloned-again
$ git clone reponame.git reponame-cloned-again
Cloning into 'reponame-cloned-again'...
done.
$ ls
Conclusion: the mirror will remove everything you did as soon as git remote update is executed on it (it is exactly like its origin again; it's a mirror).
If the origin remote is missing, this error occurs and all revisions in the mirror stay:
$ git remote update
Fetching origin
fatal: '/path/to/origin.git' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
error: Could not fetch origin
Find a string in multiple files, current & child directories:
$ grep --recursive "SEARCH_STRING" *
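A quick sketch with throwaway files; grep prints each matching line prefixed with its file path:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
echo "a SEARCH_STRING here"  > "$tmp/a.txt"
echo "another SEARCH_STRING" > "$tmp/sub/b.txt"
# finds matches in the directory and all child directories
grep --recursive "SEARCH_STRING" "$tmp"
```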
Set root password
Get disk list
$ lsblk
sda
└─ sda1 … 60 GB
Mount disk
$ mkdir /mnt/recover
$ mount /dev/sda1 /mnt/recover
Change password
$ chroot /mnt/recover /bin/bash
$ passwd
$ exit
Cleanup
$ umount /mnt/recover
$ sync
History with timestamp:
$ export HISTTIMEFORMAT="%F %T "
$ history
2027 2019-07-12 13:02:31 sudo apt update
2028 2019-07-12 13:02:33 history
2029 2019-07-12 13:03:35 HISTTIMEFORMAT="%F %T "
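`%F %T` is strftime notation for `YYYY-MM-DD HH:MM:SS`; `date` understands the same format strings, so the effect can be previewed with:

```shell
# prints the current time in the same format history will use
date +"%F %T"
```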
Update Homebrew and upgrade all packages:
$ brew update
$ brew upgrade
Homebrew, get Info about services (i.e. nginx):
$ brew info <service>
Start nginx:
$ sudo brew services start nginx
Stop nginx:
$ sudo brew services stop nginx
if PHP has no intl extension yet:
$ brew install php71-intl
$ brew install php70-intl
$ brew install php56-intl
... or whatever PHP version is running; to see it, use:
$ php --version
Show network devices:
$ ip -c link
Show addresses assigned to all network interfaces:
$ ip -c addr
Gateway info
$ ip route
List pods (with IP and node)
$ kubectl get po -o wide
Deprecated:
$ kubectl exec -it <container_name> bash
Short form:
$ kubectl exec -it <container_name> -- bash
If bash fails with "OCI runtime exec failed: […] executable file not found in $PATH": unknown, try sh:
$ kubectl exec -it <container_name> -- sh
Long form (--stdin/-i: Pass stdin to the container; --tty/-t: Stdin is a TTY):
$ kubectl exec --stdin --tty <container_name> -- bash
After VirtualBox and Vagrant are installed, add a Homestead box:
$ vagrant box add laravel/homestead
List boxes:
$ vagrant box list
Sites / Folders configuration at:
$ edit ~/Homestead/Homestead.yaml
After updating Homestead.yaml's "sites" property, reload the nginx config on the VM:
$ vagrant reload --provision
Starting the box from the homestead directory:
~/Homestead $ vagrant up
Shutting it down:
$ vagrant halt
Destroying the machine, leaving no traces of starting it up:
$ vagrant destroy --force
To install Homestead per project instead of globally, use composer:
$ composer require laravel/homestead --dev
and use the make file:
$ php vendor/bin/homestead make
Now start the machine (vagrant up)
Updating homestead, by first updating the vagrant box:
$ vagrant box update
Then update the Homestead source code via git (cloned location):
$ git pull origin master
Or use composer update if installed via composer.
Adding a new site, check the doc to get a list of site types:
https://laravel.com/docs/5.5/homestead#adding-additional-sites
The available site types are: apache, laravel (the default), proxy, silverstripe, statamic, symfony2, and symfony4.
Test if LDAP is working:
$ ldapsearch -x -h 192.168.150.2 -D "ad.read.account@yourdomain.lan" -W -b "DC=yourdomain,DC=lan" -s sub "(sAMAccountName=ad.read.account)" givenName
Most useful options to show only interesting columns:
lsblk -fo NAME,LABEL,FSTYPE,SIZE,TYPE,MODEL,STATE
Example:
NAME LABEL FSTYPE SIZE TYPE MODEL STATE
sda 7.3T disk HGST_HUH721008ALE600 running
|-sda1 1007K part
|-sda2 vfat 512M part
`-sda3 rpool zfs_member 7.3T part
sdb 7.3T disk HGST_HUH721008ALE600 running
|-sdb1 1007K part
|-sdb2 vfat 512M part
`-sdb3 rpool zfs_member 7.3T part
lsblk -o NAME,FSTYPE,FSVER,LABEL,TYPE,SIZE,FSAVAIL,FSUSE%,MOUNTPOINT
Example:
NAME FSTYPE FSVER LABEL TYPE SIZE FSAVAIL FSUSE% MOUNTPOINT
loop0 ext2 1.0 loop 3G
sda disk 7.3T
|-sda1 part 1007K
|-sda2 vfat FAT32 part 512M
`-sda3 zfs_member 5000 rpool part 7.3T
sdb disk 7.3T
|-sdb1 part 1007K
|-sdb2 vfat FAT32 part 512M
`-sdb3 zfs_member 5000 rpool part 7.3T
lsblk -o NAME,STATE,LABEL,TYPE,SIZE,FSAVAIL,FSUSE%,FSTYPE,FSVER,MOUNTPOINT,PATH,PTTYPE,PARTTYPENAME,HOTPLUG,ROTA,VENDOR,MODEL,SERIAL,REV
Example of a Raspberry Pi with a freshly attached, Windows-formatted 3 TB USB Seagate HDD. mmcblk0 is the micro SD card with two partitions.
NAME STATE LABEL TYPE SIZE FSAVAIL FSUSE% FSTYPE FSVER MOUNTPOINT PATH PTTYPE PARTTYPENAME HOTPLUG ROTA VENDOR MODEL SERIAL REV
sda running disk 2.7T /dev/sda PMBR 1 1 Seagate ST3000DM001-1E6166 Z1F364M3 0711
mmcblk0 disk 119.3G /dev/mmcblk0 dos 1 0 0x900c530e
|-mmcblk0p1 bootfs part 256M 205.2M 20% vfat FAT32 /boot /dev/mmcblk0p1 dos W95 FAT32 (LBA) 1 0
`-mmcblk0p2 rootfs part 119G 108.6G 3% ext4 1.0 / /dev/mmcblk0p2 dos Linux 1 0
Another example: a 500 GB 2.5" Western Digital HDD, formatted to exFAT with the partition named backups_srv in Windows. The shown serial number is printed on the HDD case, which makes it simple to find:
NAME STATE LABEL TYPE SIZE FSAVAIL FSUSE% FSTYPE FSVER MOUNTPOINT PATH PTTYPE PARTTYPENAME HOTPLUG ROTA VENDOR MODEL SERIAL REV
sda running disk 465.8G /dev/sda dos 1 1 WD WDC_WD5000BMVV-11GNWS0 WD-WXA1AB0X0193 2005
`-sda1 backups_srv part 465.8G 465.7G 0% exfat 1.0 /media/pg/backups_srv /dev/sda1 dos HPFS/NTFS/exFAT 1 1
mmcblk0 disk 119.3G /dev/mmcblk0 dos 1 0 0x900c530e
|-mmcblk0p1 bootfs part 256M 204.6M 20% vfat FAT32 /boot /dev/mmcblk0p1 dos W95 FAT32 (LBA) 1 0
`-mmcblk0p2 rootfs part 119G 108.3G 3% ext4 1.0 / /dev/mmcblk0p2 dos Linux 1 0
Start up the mongo daemon:
$ mongod --dbpath ~/MongoDB/data/db
Start mongo shell:
$ mongo
Move everything up one directory from current directory:
$ mv * .[^.] .??* ..
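Why three patterns: `*` skips dotfiles, `.[^.]` catches two-character dotfiles like `.a`, and `.??*` catches dotfiles of three or more characters; together they cover everything except `.` and `..`. A sketch (assuming bash globbing):

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch visible.txt .a .ab
# matches all three files, but never . or ..
ls -d * .[^.] .??*
```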
Fetch MySQL dump from external server:
mysqldump -h <host> -u <username> -p <database> --single-transaction --quick --lock-tables=false --no-tablespaces > db-backup-<database>-$(date +%F).sql
Example:
mysqldump -h 123.123.123.123 -u mysqluser -p applicationdb --single-transaction --quick --lock-tables=false --no-tablespaces > db-backup-applicationdb-$(date +%F).sql
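`$(date +%F)` expands to the current date as YYYY-MM-DD, so each dump gets a day-stamped file name:

```shell
# build the dump file name the same way the mysqldump command does
backup="db-backup-applicationdb-$(date +%F).sql"
echo "$backup"
```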
List all partitions:
$ diskutil list
List mounts:
$ ll /dev/
Create the NTFS Volume dir
$ sudo mkdir /Volumes/MOUNTNAME
Unmount an already mounted drive:
$ diskutil unmount /dev/diskXsY
Mount a drive:
$ sudo /usr/local/bin/ntfs-3g /dev/diskXsY /Volumes/MOUNTNAME -olocal -oallow_other -ovolname=MOUNTNAME
Uninstall ntfs-3g:
$ brew uninstall ntfs-3g
If the original mount tool was replaced, restore Apple's NTFS mount tool:
$ sudo mv "/Volumes/Macintosh HD/sbin/mount_ntfs.orig" "/Volumes/Macintosh HD/sbin/mount_ntfs"
Show currently installed php versions:
brew ls --versions | ggrep -E 'php(@.*)?\s' | ggrep -oP '(?<=\s)\d\.\d' | uniq | sort
Switch current php version:
$> 7.4
$> 8.0
Copy a PostgreSQL from one server to another:
$ pg_dump -C -h localhost -U localuser dbname |psql -h remotehost -U remoteuser dbname
Copy sourcedb database from host to destdb on other server:
$ pg_dump -C -h localhost -U sourcedbuser sourcedb | psql -h <domain_or_ip> -U destdbuser destdb
Switch to the postgres user and open the CLI:
su postgres
psql
List databases:
\l
Connect to a database:
\c <database_name>
List tables:
\dt
\dt+ to also show size and description
Measure query times:
\timing [on|off]
toggles timing of commands
Foreign data:
\des - list foreign servers
\deu - list user mappings
\det - list foreign tables
\dtE - list both local and foreign tables
\d <name of foreign table> - show columns, data types, and other table metadata
pipx - https://pypa.github.io/pipx/
List installed applications:
$ pipx list
Install an application with pipx, automatically in an isolated environment. The example is simplemonitor.
$ pipx install simplemonitor
installed package simplemonitor 1.11.0, installed using Python 3.10.2
These apps are now globally available
- simplemonitor
- winmonitor
Start an application with:
$ pipx run APP [ARGS…]
$ pipx run simplemonitor
Update applications:
$ pipx upgrade [APP]
$ pipx upgrade-all
Copy the file "foobar.txt" from local host to remote host:
$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory/
Copy the file "foobar.txt" from remote host to local host:
$ scp your_username@remotehost.edu:foobar.txt /some/local/directory
Copy the file "foobar.txt" from remote host to local host into current directory:
$ scp your_username@remotehost.edu:foobar.txt .
Copy the directory "foo" from local host to remote host's directory "bar":
$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar
Copy the directory "foo" from remote host to local host:
$ scp -r your_username@remotehost.edu:/some/remote/directory/bar /some/local/directory
Copy the file "foobar.txt" from remote host "rh1.edu" to remote host "rh2.edu"
$ scp your_username@rh1.edu:/some/remote/directory/foobar.txt your_username@rh2.edu:/some/remote/directory/
Copying the files "foo.txt" and "bar.txt" from local host to your home directory on the remote host:
$ scp foo.txt bar.txt your_username@remotehost.edu:~
Copy the file "foobar.txt" from local host to remote host using port 2264:
$ scp -P 2264 foobar.txt your_username@remotehost.edu:/some/remote/directory
Copy multiple files from remote host to your current directory on the local host:
$ scp your_username@remotehost.edu:/some/remote/directory/\{a,b,c\} .
$ scp your_username@remotehost.edu:~/\{foo.txt,bar.txt\} .
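The backslashes matter: unescaped, the local shell expands the braces before scp runs, so the remote side never sees the pattern. Escaped, the braces are passed through literally (assuming a bash local shell):

```shell
echo {a,b,c}.txt     # local brace expansion: a.txt b.txt c.txt
echo \{a,b,c\}.txt   # escaped, stays literal: {a,b,c}.txt
```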
Log in forcing password authentication; use this to test whether only public key auth is accepted:
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no <user>@<host>
root@123.55.66.77: Permission denied (publickey).
Mount root of a server to ~/mnt/someserver using public key:
sudo sshfs -o allow_other,default_permissions,IdentityFile=~/.ssh/id_rsa root@123.123.123.123:/ ~/mnt/someserver
Dump an SVN repo to be used as a backup & restore dump:
/Library/Developer/CommandLineTools/usr/bin/svnrdump dump https://host.tld/svn/project/repository | gzip -9 > dfw-base.dump
Create a tar.xz archive from files:
tar cfvJ <archive.tar.xz> <files>
Create a tar.gz archive from a directory:
$ tar cfvz archive.tar.gz /home/directoryToArchive
List all files inside a TarGz:
$ tar -ztvf archive.tar.gz
Extract via GNU tar (recognizes format):
$ tar xfv <archive.tar/.tar.gz/.tar.xz>
Extract compressed archive to current directory:
$ tar -xzvf archive.tgz
Without compression:
$ tar xfv archive.tar
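The create/list/extract cycle end to end, on a throwaway directory:

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir payload
echo "data" > payload/file.txt
tar cfvz archive.tar.gz payload   # create compressed archive
tar -ztvf archive.tar.gz          # list its contents
mkdir extract && cd extract
tar xfv ../archive.tar.gz         # extract into current directory
cat payload/file.txt              # -> data
```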
If the Time Machine backup does not work after the destination was added in the system settings:
Set the Time Machine destination using the terminal (the terminal app needs Full Disk Access in the Privacy & Security settings).
Command:
sudo tmutil setdestination -p "PROTOCOL://USER@IP/FOLDER"
Example:
sudo tmutil setdestination -p "smb://backupserver\\username@nas01/home/hostname_timemachine"
Switch to the foobar2000 dir:
$ cd ~/.wine/drive_c/"Program Files (x86)"/foobar2000
$ wine foobar2000.exe
Add dock icon:
$ tell application "Terminal"
Visit: https://github.com/ytdl-org/youtube-dl
Install
brew install yt-dlp
brew install ffmpeg
Quick download (.webm):
yt-dlp <video>
Download in mp4 if available, don't limit to 1080p (if not available, choose the best format and re-encode to mp4; possible codec: Google/On2's VP9 Video VP90):
yt-dlp -S res,ext:mp4:m4a --recode mp4 <video>
Get mp3 audio (automatically chooses highest quality)
yt-dlp -x --audio-format mp3 <video>
Start the PHP built-in server for a Zend Framework project.
Execute from the project root:
php72 -S 0.0.0.0:8081 -t public/ public/index.php
https://docs.oracle.com/cd/E19253-01/819-5461/gaynp/index.html
$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 7.27T 1.15G 7.26T - - 0% 0% 1.00x ONLINE -
$ zpool status -x
all pools are healthy
$ zpool status rpool
pool: rpool
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sda3 ONLINE 0 0 0
sdb3 ONLINE 0 0 0
errors: No known data errors
Rescue image: FreeBSD 13.0
$ apt-get install zfs-dkms zfsutils-linux
$ zpool import
Zip single file + password protect
zip -e [archive.zip] [file]
Zip folder + password protect
zip -er [archive.zip] [folder]
Unzip:
unzip [archive.zip]
Gzip single file (to gzip a folder, see tar):
gzip -c filename.ext > anotherfile.gz
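Round trip: `gzip -c` writes to stdout (leaving the original untouched), `gunzip -c` streams it back:

```shell
tmp=$(mktemp -d)
cd "$tmp"
echo "hello" > filename.ext
gzip -c filename.ext > anotherfile.gz   # original stays in place
gunzip -c anotherfile.gz                # -> hello
ls filename.ext                         # still present
```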
With progress:
tar cf - [folder] -P | pv -s $(($(du -sk [folder] | awk '{print $1}') * 1024)) | gzip > folder.tar.gz
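The `$(du -sk … | awk)` part only computes the folder size in bytes so pv can show a percentage; `du -sk` reports kilobytes, hence the `* 1024`:

```shell
tmp=$(mktemp -d)
# create a 100 KiB test file in the folder
dd if=/dev/zero of="$tmp/blob" bs=1024 count=100 2>/dev/null
# du -sk prints the size in KiB; multiply by 1024 to get bytes for pv -s
bytes=$(($(du -sk "$tmp" | awk '{print $1}') * 1024))
echo "$bytes"
```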