
Friday, November 24, 2023

Know Your Linux Hardware Using inxi

On Ubuntu

(base) ashish@ashish:~$ inxi
Command 'inxi' not found, but can be installed with:
sudo apt install inxi

(base) ashish@ashish:~$ inxi
CPU: dual core Intel Core i5-4300U (-MT MCP-) speed/min/max: 798/800/2900 MHz
Kernel: 6.2.0-37-generic x86_64 Up: 2h 16m
Mem: 3127.7/7624.3 MiB (41.0%) Storage: 223.57 GiB (45.9% used) Procs: 241
Shell: Bash inxi: 3.3.13

(base) ashish@ashish:~$ inxi -Fxz
System:
  Kernel: 6.2.0-37-generic x86_64 bits: 64 compiler: N/A Desktop: GNOME 42.9
    Distro: Ubuntu 22.04.3 LTS (Jammy Jellyfish)
Machine:
  Type: Laptop System: LENOVO product: 20ARS2C00D v: ThinkPad T440s
    serial: <superuser required>
  Mobo: LENOVO model: 20ARS2C00D v: 0B98401 WIN serial: <superuser required>
    UEFI-[Legacy]: LENOVO v: GJET79WW (2.29 ) date: 09/03/2014
Battery:
  ID-1: BAT0 charge: 18.0 Wh (98.4%) condition: 18.3/23.2 Wh (79.0%)
    volts: 12.1 min: 11.1 model: SONY 45N1111 status: Not charging
  ID-2: BAT1 charge: 1.4 Wh (63.6%) condition: 2.2/23.5 Wh (9.3%)
    volts: 12.4 min: 11.4 model: LGC 45N1127 status: Charging
CPU:
  Info: dual core model: Intel Core i5-4300U bits: 64 type: MT MCP
    arch: Haswell rev: 1 cache: L1: 128 KiB L2: 512 KiB L3: 3 MiB
  Speed (MHz): avg: 1074 high: 1896 min/max: 800/2900
    cores: 1: 800 2: 1896 3: 800 4: 800 bogomips: 19953
  Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
Graphics:
  Device-1: Intel Haswell-ULT Integrated Graphics vendor: Lenovo driver: i915
    v: kernel bus-ID: 00:02.0
  Device-2: Lite-On Integrated Camera type: USB driver: uvcvideo bus-ID: 2-8:2
  Display: x11 server: X.Org v: 1.21.1.4 driver: X: loaded: modesetting
    unloaded: fbdev,vesa gpu: i915 resolution: 1600x900~60Hz
  OpenGL: renderer: Mesa Intel HD Graphics 4400 (HSW GT2) v: 4.6
    Mesa 23.0.4-0ubuntu1~22.04.1 direct render: Yes
Audio:
  Device-1: Intel Haswell-ULT HD Audio vendor: Lenovo driver: snd_hda_intel
    v: kernel bus-ID: 00:03.0
  Device-2: Intel 8 Series HD Audio vendor: Lenovo driver: snd_hda_intel
    v: kernel bus-ID: 00:1b.0
  Sound Server-1: ALSA v: k6.2.0-37-generic running: yes
  Sound Server-2: PulseAudio v: 15.99.1 running: yes
  Sound Server-3: PipeWire v: 0.3.48 running: yes
Network:
  Device-1: Intel Ethernet I218-LM vendor: Lenovo ThinkPad X240
    driver: e1000e v: kernel port: 3080 bus-ID: 00:19.0
  IF: enp0s25 state: down mac: <filter>
  Device-2: Intel Wireless 7260 driver: iwlwifi v: kernel bus-ID: 03:00.0
  IF: wlp3s0 state: up mac: <filter>
Drives:
  Local Storage: total: 223.57 GiB used: 102.66 GiB (45.9%)
  ID-1: /dev/sda vendor: Western Digital model: WDS240G2G0A-00JH30
    size: 223.57 GiB
Partition:
  ID-1: / size: 218.51 GiB used: 102.65 GiB (47.0%) fs: ext4 dev: /dev/sda3
  ID-2: /boot/efi size: 512 MiB used: 6.1 MiB (1.2%) fs: vfat dev: /dev/sda2
Swap:
  ID-1: swap-1 type: file size: 2 GiB used: 0 KiB (0.0%) file: /swapfile
Sensors:
  System Temperatures: cpu: 45.0 C mobo: N/A
  Fan Speeds (RPM): fan-1: 0
Info:
  Processes: 240 Uptime: 2h 17m Memory: 7.45 GiB used: 3.09 GiB (41.5%)
  Init: systemd runlevel: 5 Compilers: gcc: 11.4.0 Packages: 2138
  Shell: Bash v: 5.1.16 inxi: 3.3.13

$ free -h
        total   used   free   shared   buff/cache   available
Mem:    7.4Gi   2.3Gi  1.5Gi   473Mi        3.6Gi       4.4Gi
Swap:   2.0Gi      0B  2.0Gi

(base) ashish@ashish:~$
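inxi can also be scoped to a single subsystem instead of the full -Fxz dump. A minimal sketch using its per-category flags (confirm against inxi --help on your version):

$ inxi -b    # short "basic" summary of the whole machine
$ inxi -C    # CPU only
$ inxi -G    # graphics only
$ inxi -D    # local drives only
$ inxi -n    # network devices plus interface state

Adding -z to any of these filters out serial numbers and MAC addresses, as in the -Fxz run above.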

On Termux on Android Tablet PC by Samsung

$ inxi
Use of uninitialized value $sc_freq_max[0] in join or string at /data/data/com.termux/files/usr/bin/inxi line 10313.
CPU: 2x 6-core AArch64 (-MT MCP AMP-) speed/min/max: 961/614:1229/2002 MHz
Kernel: 4.14.199-27204164-abX205XXU3CWI3 aarch64 Up: 17m
Mem: 1.65/2.45 GiB (67.4%) Storage: 15.52 GiB/Total N/A Procs: 2
Shell: Bash inxi: 3.3.31

$ inxi -Fxz
System:
  Kernel: 4.14.199-27204164-abX205XXU3CWI3 arch: aarch64 bits: 64
    compiler: N/A Console: pty pts/0 Distro: Android
Machine:
  Type: ARM System: UNISOC T618
Use of uninitialized value in string eq at /data/data/com.termux/files/usr/bin/inxi line 10234.
Use of uninitialized value $sc_freq_max[0] in join or string at /data/data/com.termux/files/usr/bin/inxi line 10313.
CPU:
  Info: 2x 6-core model: AArch64 bits: 64 type: MT MCP AMP arch: aarch64 rev: 0
  Speed (MHz): avg: 961 high: 2002 min/max: 614:1229/2002 boost: disabled
    cores: 1: 614 2: 614 3: 614 4: 614 5: 614 6: 614 7: 2002 8: 2002
    bogomips: 416
  Features: Use -f option to see features
Graphics:
  Message: No ARM data found for this feature.
  Display: server: No display server data found. Headless machine? tty: 80x40
  API: N/A Message: No API data available in console. Headless machine?
Audio:
  Message: No ARM data found for this feature.
Network:
  Message: No ARM data found for this feature.
Drives:
  Local Storage: total: 0 KiB used: 15.52 GiB
Partition:
  ID-1: / size: 3.68 GiB used: 3.66 GiB (99.5%) fs: ext4 dev: /dev/dm-4
  ID-2: /cache size: 303.1 MiB used: 26 MiB (8.6%) fs: ext4 dev: /dev/mmcblk0p52
Swap:
  Alert: No swap data was found.
Sensors:
  Src: lm-sensors Missing: Required tool sensors not installed.
    Check --recommends
Info:
  Processes: 2 Uptime: 18m Memory: total: N/A available: 2.45 GiB
    used: 1.65 GiB (67.3%) Init: N/A Compilers: N/A Packages: 61
  Shell: Bash v: 5.0.18 inxi: 3.3.31
$
Tags: Technology,Linux,

Wednesday, October 26, 2022

Using Termux to get information about my Android device

Welcome to Termux!

Wiki:            https://wiki.termux.com
Community forum: https://termux.com/community
Gitter chat:     https://gitter.im/termux/termux
IRC channel:     #termux on freenode

Working with packages:
 * Search packages:   pkg search [query]
 * Install a package: pkg install [package]
 * Upgrade packages:  pkg upgrade

Subscribing to additional repositories:
 * Root:     pkg install root-repo
 * Unstable: pkg install unstable-repo
 * X11:      pkg install x11-repo

Report issues at https://termux.com/issues

1. Getting OS Info

$ uname
Linux

$ uname -a
Linux localhost 4.14.199-24365169-abX205XXU1AVG1 #2 SMP PREEMPT Tue Jul 5 20:39:23 KST 2022 aarch64 Android
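If you only need one field, uname takes per-field flags (standard in the coreutils build Termux ships):

$ uname -r    # kernel release only
$ uname -m    # machine hardware name (aarch64 here)
$ uname -o    # operating system (prints Android under Termux)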

2. Getting Processor Info

$ more /proc/cpuinfo
Processor       : AArch64 Processor rev 1 (aarch64)

processor       : 0
BogoMIPS        : 52.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x1
CPU part        : 0xd05
CPU revision    : 0

processor       : 1
BogoMIPS        : 52.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x1
CPU part        : 0xd05
CPU revision    : 0

processor       : 2
BogoMIPS        : 52.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x1
CPU part        : 0xd05
CPU revision    : 0

processor       : 3
BogoMIPS        : 52.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x1
CPU part        : 0xd05
CPU revision    : 0

processor       : 4
BogoMIPS        : 52.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x1
CPU part        : 0xd05
CPU revision    : 0

processor       : 5
BogoMIPS        : 52.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x1
CPU part        : 0xd05
CPU revision    : 0

processor       : 6
BogoMIPS        : 52.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x3
CPU part        : 0xd0a
CPU revision    : 1

processor       : 7
BogoMIPS        : 52.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x3
CPU part        : 0xd0a
CPU revision    : 1

Hardware        : Unisoc ums512
Serial          : 96789ab0ffeb70e8d1320621ab4d084fb1082517682936e1977afc5ae63a3c7b
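The full dump is repetitive; grep condenses it. A small sketch (the part-number-to-core mapping follows ARM's published IDs: 0xd05 is Cortex-A55 and 0xd0a is Cortex-A75, matching the 6+2 core layout of the UNISOC T618):

$ grep -c '^processor' /proc/cpuinfo          # count of logical CPUs; prints 8 here
$ grep '^CPU part' /proc/cpuinfo | sort -u    # distinct core types: 0xd05 and 0xd0a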

3. Getting my username

$ whoami
u0_a218

4. Getting Your IP Address

$ ifconfig
Warning: cannot open /proc/net/dev (Permission denied). Limited output.
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.102  netmask 255.255.255.0  broadcast 192.168.1.255
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
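Note the 'Permission denied' warning: unrooted Android restricts parts of /proc/net. The iproute2 tools are the modern replacement for ifconfig and usually still work; a sketch, assuming pkg install iproute2 has been run:

$ ip -4 addr show wlan0    # IPv4 address of the Wi-Fi interface
$ ip route                 # routing table, including the default gateway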

5. Checking RAM Usage

$ free -h
        total   used   free   shared   buff/cache   available
Mem:    2.4Gi   1.9Gi  113Mi    12Mi        493Mi       448Mi
Swap:   2.5Gi   1.2Gi  1.3Gi
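'available' (448Mi) is the better estimate of memory that can still be claimed without swapping; 'free' counts only completely untouched pages. The same figures come straight from the kernel:

$ grep -E 'MemTotal|MemAvailable|SwapTotal' /proc/meminfo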

6. Checking Space on Hard Disk

$ df -h
Filesystem                  Size  Used  Avail  Use%  Mounted on
/dev/block/dm-4             3.2G  3.2G   2.5M  100%  /
tmpfs                       1.2G  1.3M   1.2G    1%  /dev
tmpfs                       1.2G     0   1.2G    0%  /mnt
/dev/block/dm-1             122M  122M      0  100%  /system_ext
/dev/block/dm-5             759M  751M      0  100%  /vendor
/dev/block/dm-6             1.0G  1.0G      0  100%  /product
/dev/block/dm-7             271M  166M    99M   63%  /prism
/dev/block/dm-8              31M  408K    30M    2%  /optics
tmpfs                       1.2G     0   1.2G    0%  /apex
/dev/block/dm-11            1.8M  1.7M      0  100%  /apex/com.android.os.statsd@311510000
/dev/block/dm-12            704K  676K    16K   98%  /apex/com.android.sdkext@330810010
/dev/block/dm-13             13M   13M      0  100%  /apex/com.android.cellbroadcast@330911010
/dev/block/dm-14             15M   15M      0  100%  /apex/com.android.permission@330912010
/dev/block/dm-15            7.9M  7.8M      0  100%  /apex/com.android.tethering@330911010
/dev/block/dm-16            3.8M  3.7M      0  100%  /apex/com.android.resolv@330910000
/dev/block/dm-17             19M   19M      0  100%  /apex/com.android.media.swcodec@330443040
/dev/block/dm-18            8.4M  8.4M      0  100%  /apex/com.android.mediaprovider@330911040
/dev/block/dm-19            836K  808K    12K   99%  /apex/com.android.tzdata@303200001
/dev/block/dm-20            7.2M  7.1M      0  100%  /apex/com.android.neuralnetworks@330443000
/dev/block/dm-21            7.8M  7.7M      0  100%  /apex/com.android.adbd@330444000
/dev/block/dm-22            4.8M  4.8M      0  100%  /apex/com.android.conscrypt@330443020
/dev/block/dm-23            5.6M  5.6M      0  100%  /apex/com.android.extservices@330443000
/dev/block/dm-24            748K  720K    16K   98%  /apex/com.android.ipsec@330443010
/dev/block/dm-25            5.7M  5.6M      0  100%  /apex/com.android.media@330443030
/dev/block/loop21            24M   24M      0  100%  /apex/com.android.i18n@1
/dev/block/loop22           5.1M  5.1M      0  100%  /apex/com.android.wifi@300000000
/dev/block/loop23           5.0M  5.0M      0  100%  /apex/com.android.runtime@1
/dev/block/loop24           236K   72K   160K   32%  /apex/com.samsung.android.shell@303013100
/dev/block/loop25            82M   82M      0  100%  /apex/com.android.art@1
/dev/block/loop26           232K   92K   136K   41%  /apex/com.android.apex.cts.shim@1
/dev/block/loop27           109M  109M      0  100%  /apex/com.android.vndk.v30@1
/dev/block/loop28           236K   32K   200K   14%  /apex/com.samsung.android.wifi.broadcom@300000000
/dev/block/loop29           236K   32K   200K   14%  /apex/com.samsung.android.camera.unihal@301742001
/dev/block/by-name/cache    303M   12M   285M    4%  /cache
/dev/block/by-name/sec_efs   11M  788K    10M    8%  /efs
/dev/fuse                    22G  8.5G    13G   40%  /storage/emulated
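To check a single mount instead of the full table, pass its path to df; du summarizes a directory tree. A small sketch:

$ df -h /storage/emulated    # just the shared-storage mount
$ du -sh ~                   # total size of the Termux home directory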

7. Print Environment Variables

$ echo $USER

$ echo $HOME
/data/data/com.termux/files/home
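echo $USER prints an empty line above because Termux does not set USER by default. printenv lists whatever is actually set; a sketch:

$ printenv | sort | head    # first few environment variables, alphabetically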

8. Print Working Directory

$ pwd
/data/data/com.termux/files/home
Tags: Technology,Android,Linux,

SSH Setup For Accessing Ubuntu From Windows Using SFTP

Getting Basic Info Like Hostname and IP

(base) C:\Users\ashish>hostname
CS3L

(base) C:\Users\ashish>ipconfig

Windows IP Configuration

Ethernet adapter Ethernet 2:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : ad.itli.com

Ethernet adapter Ethernet:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : ad.itli.com

Wireless LAN adapter Wi-Fi:

   Connection-specific DNS Suffix  . :
   IPv6 Address. . . . . . . . . . . : 2401:4900:47f2:5147:b1b2:6d59:f669:1b96
   Temporary IPv6 Address. . . . . . : 2401:4900:47f2:5147:15e3:46:9f5b:8d78
   Link-local IPv6 Address . . . . . : fe80::b1b2:6d59:f669:1b96%13
   IPv4 Address. . . . . . . . . . . : 192.168.1.100
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : fe80::d837:1aff:fe40:b173%13
                                       192.168.1.1

Ethernet adapter Bluetooth Network Connection:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :

Setting up SSH

(base) C:\Users\ashish>mkdir .ssh

(base) C:\Users\ashish>dir
 Volume in drive C is OSDisk
 Volume Serial Number is ABCD-PQRS

 Directory of C:\Users\ashish

10/26/2022  03:25 PM    <DIR>          .
10/26/2022  03:25 PM    <DIR>          ..
08/16/2022  01:29 PM    <DIR>          .3T
09/26/2022  08:04 AM             1,288 .bash_history
06/02/2022  10:15 AM    <DIR>          .cache
05/30/2022  11:39 AM    <DIR>          .conda
10/26/2022  02:58 PM                89 .dotty_history
08/19/2022  06:42 PM                68 .gitconfig
10/11/2022  02:03 PM    <DIR>          .ipython
05/30/2022  10:05 AM    <DIR>          .jupyter
05/30/2022  12:56 PM    <DIR>          .keras
08/20/2022  11:55 AM                20 .lesshst
07/04/2022  06:09 PM    <DIR>          .matplotlib
06/30/2022  10:32 AM    <DIR>          .ms-ad
10/07/2022  09:00 PM             1,457 .python_history
10/26/2022  03:25 PM    <DIR>          .ssh
09/06/2022  10:13 PM             2,379 .viminfo
05/30/2022  11:34 AM    <DIR>          .vscode
05/16/2022  03:19 PM    <DIR>          3D Objects
10/07/2022  02:50 PM    <DIR>          Anaconda3
05/16/2022  03:19 PM    <DIR>          Contacts
10/26/2022  02:57 PM    <DIR>          Desktop
10/07/2022  06:27 PM    <DIR>          Documents
10/26/2022  03:18 PM    <DIR>          Downloads
05/16/2022  03:19 PM    <DIR>          Favorites
05/16/2022  03:19 PM    <DIR>          Links
05/16/2022  03:19 PM    <DIR>          Music
05/16/2022  02:13 PM    <DIR>          OneDrive
05/16/2022  03:20 PM    <DIR>          Pictures
05/16/2022  03:19 PM    <DIR>          Saved Games
05/16/2022  03:20 PM    <DIR>          Searches
05/30/2022  09:36 AM    <DIR>          Videos
               6 File(s)          5,301 bytes
              26 Dir(s)  81,987,842,048 bytes free

(base) C:\Users\ashish>cd .ssh

(base) C:\Users\ashish\.ssh>dir
 Volume in drive C is OSDisk
 Volume Serial Number is ABCD-PQRS

 Directory of C:\Users\ashish\.ssh

10/26/2022  03:25 PM    <DIR>          .
10/26/2022  03:25 PM    <DIR>          ..
               0 File(s)              0 bytes
               2 Dir(s)  81,987,903,488 bytes free

(base) C:\Users\ashish\.ssh>echo "" > id_rsa

(base) C:\Users\ashish\.ssh>dir
 Volume in drive C is OSDisk
 Volume Serial Number is ABCD-PQRS

 Directory of C:\Users\ashish\.ssh

10/26/2022  03:26 PM    <DIR>          .
10/26/2022  03:26 PM    <DIR>          ..
10/26/2022  03:26 PM                 5 id_rsa
               1 File(s)              5 bytes
               2 Dir(s)  81,987,678,208 bytes free

(base) C:\Users\ashish\.ssh>type id_rsa
(base) C:\Users\ashish\.ssh>

(base) C:\Users\ashish>ssh-keygen -t rsa -f ./.ssh/id_rsa -P ""
Generating public/private rsa key pair.
./.ssh/id_rsa already exists.
Overwrite (y/n)? y
Your identification has been saved in ./.ssh/id_rsa.
Your public key has been saved in ./.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:fGEZHROeTzogrdXwo7haw0g3eXLVZnO9nM0ZtTbIBh8 itlitli\ashish@CS3L
The key's randomart image is:
+---[RSA 3072]----+
| oo+E .          |
| . B=+o +        |
| . B B=*=o       |
| . B =.Bo+B      |
| . S = o .=o     |
| . + B .         |
| . =             |
| o .             |
| .               |
+----[SHA256]-----+

(base) C:\Users\ashish>
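The Windows 10 build of OpenSSH also supports Ed25519 keys, which are shorter than RSA and generally preferred where the server accepts them. A sketch of the equivalent no-passphrase generation (the file name here is just an example):

CMD> ssh-keygen -t ed25519 -f ./.ssh/id_ed25519 -P ""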

Note This Error While Doing Setup on Windows

CMD> ssh-copy-id -i ./.ssh/id_rsa.pub ashish@192.168.1.100
'ssh-copy-id' is not recognized as an internal or external command,
operable program or batch file.

We overcome this issue by manually copying the public RSA key into the 'authorized_keys' file on the remote machine using SFTP.

(base) C:\Users\ashish>sftp
usage: sftp [-46aCfpqrv] [-B buffer_size] [-b batchfile] [-c cipher]
            [-D sftp_server_path] [-F ssh_config] [-i identity_file]
            [-J destination] [-l limit] [-o ssh_option] [-P port]
            [-R num_requests] [-S program] [-s subsystem | sftp_server]
            destination
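An alternative to the SFTP round trip described below is to pipe the public key over a one-off ssh session, which is essentially what ssh-copy-id does internally. A sketch, assuming password login to the Ubuntu box (192.168.1.151) still works:

CMD> type .ssh\id_rsa.pub | ssh ashish@192.168.1.151 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"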

Next Steps of Copying Public Key Onto Remote Machine And Vice-versa

Address of Ubuntu System: ashish@192.168.1.151

(base) C:\Users\ashish>sftp ashish@192.168.1.151
The authenticity of host '192.168.1.151 (192.168.1.151)' can't be established.
ECDSA key fingerprint is SHA256:2hgOVHHgkrT9/6XnK/KDaFQ0DaXLUoW82eeU6oQyTvQ.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Warning: Permanently added '192.168.1.151' (ECDSA) to the list of known hosts.
ashish@192.168.1.151's password:
Connected to 192.168.1.151.
sftp> ls
Desktop    Documents  Downloads  Music      Pictures   Public
Templates  Videos     anaconda3  nltk_data  snap
sftp> bye

PWD: /home/ashish

sftp> put id_rsa.pub win_auth_key.txt
Uploading id_rsa.pub to /home/ashish/win_auth_key.txt
id_rsa.pub                                  100%  593    89.9KB/s   00:00
sftp>

PWD: /home/ashish/.ssh

sftp> get id_rsa.pub ./ubuntu_id_rsa.pub.txt
Fetching /home/ashish/.ssh/id_rsa.pub to ./ubuntu_id_rsa.pub.txt
/home/ashish/.ssh/id_rsa.pub                100%  573     2.7KB/s   00:00
sftp> bye

Steps on Ubuntu Machine

(base) ashish@ashishlaptop:~$ cat win_auth_key.txt
ssh-rsa AAA***vZs= itli\ashish@CS3L
(base) ashish@ashishlaptop:~$

Paste this Public RSA Key in 'authorized_keys' File

(base) ashish@ashishlaptop:~/.ssh$ nano authorized_keys
(base) ashish@ashishlaptop:~/.ssh$ cat authorized_keys
ssh-rsa AAAA***rzFM= ashish@ashishdesktop
ssh-rsa AAAA***GOD0= ashish@ashishlaptop
ssh-rsa AAAA***3vZs= itli\ashish@CS3L
(base) ashish@ashishlaptop:~/.ssh$
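If key-based login still falls back to a password after this, check permissions; sshd silently ignores an authorized_keys file that is group- or world-writable:

(base) ashish@ashishlaptop:~/.ssh$ chmod 700 ~/.ssh
(base) ashish@ashishlaptop:~/.ssh$ chmod 600 ~/.ssh/authorized_keys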

Testing The SSH Setup

Back to Windows 10 System

(base) C:\Users\ashish>ssh ashish@ashishlaptop
The authenticity of host 'ashishlaptop (192.168.1.151)' can't be established.
ECDSA key fingerprint is SHA256:2hgOVHHgkrT9/6XnK/KDaFQ0DaXLUoW82eeU6oQyTvQ.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ashishlaptop' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-52-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

2 updates can be applied immediately.
To see these additional updates run: apt list --upgradable

Last login: Wed Oct 26 13:35:44 2022 from 192.168.1.151

(base) ashish@ashishlaptop:~$ ls
anaconda3  Desktop  Documents  Downloads  Music  nltk_data  Pictures  Public  snap  Templates  Videos  win_auth_key.txt
(base) ashish@ashishlaptop:~$ rm win_auth_key.txt
(base) ashish@ashishlaptop:~$ ls
anaconda3  Desktop  Documents  Downloads  Music  nltk_data  Pictures  Public  snap  Templates  Videos
(base) ashish@ashishlaptop:~$ exit
logout
Connection to ashishlaptop closed.

(base) C:\Users\ashish>ssh ashish@ashishlaptop
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-52-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

2 updates can be applied immediately.
To see these additional updates run: apt list --upgradable

Last login: Wed Oct 26 15:46:02 2022 from 192.168.1.100

(base) ashish@ashishlaptop:~$
client_loop: send disconnect: Connection reset

(base) C:\Users\ashish>
Tags: Technology,SSH,Linux,Windows CMD,

Saturday, October 15, 2022

MongoDB and Node.js Installation on Ubuntu (Oct 2022)

Part 1: MongoDB

(base) ashish@ashishlaptop:~/Desktop$ sudo apt-get install -y mongodb-org=6.0.2 mongodb-org-database=6.0.2 mongodb-org-server=6.0.2 mongodb-org-mongos=6.0.2 mongodb-org-tools=6.0.2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 mongodb-org-mongos : Depends: libssl1.1 (>= 1.1.1) but it is not installable
 mongodb-org-server : Depends: libssl1.1 (>= 1.1.1) but it is not installable
E: Unable to correct problems, you have held broken packages.

References:
1) Install MongoDB (latest) on Ubuntu
2) Install MongoDB (v6.0) on Ubuntu
3) Install MongoDB (v5.0) on Ubuntu

Resolution

Dated: 2022-Oct-14

MongoDB has no official build for Ubuntu 22.04 at the moment. Ubuntu 22.04 has upgraded libssl to 3 and does not provide libssl1.1.

You can force the installation of libssl1.1 by adding the Ubuntu 20.04 (focal) security repository:

$ echo "deb http://security.ubuntu.com/ubuntu focal-security main" | sudo tee /etc/apt/sources.list.d/focal-security.list
$ sudo apt-get update
$ sudo apt-get install libssl1.1

Then use your commands to install mongodb-org. Afterwards, delete the focal-security list file you just created:

$ sudo rm /etc/apt/sources.list.d/focal-security.list

[ Ref ]
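Once the install goes through, a quick sanity check (assuming the mongod systemd unit shipped with the mongodb-org packages):

$ sudo systemctl start mongod
$ sudo systemctl status mongod
$ mongod --version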

Part 2: Node.js

(base) ashish@ashishlaptop:~/Desktop/node$ node
Command 'node' not found, but can be installed with:
sudo apt install nodejs

(base) ashish@ashishlaptop:~/Desktop/node$ sudo apt install nodejs

(base) ashish@ashishlaptop:~/Desktop/node$ node -v
v12.22.9

(base) ashish@ashishlaptop:~/Desktop/node$ npm
Command 'npm' not found, but can be installed with:
sudo apt install npm

$ sudo apt install npm

(base) ashish@ashishlaptop:~/Desktop/node$ npm -v
8.5.1

Issue when MongoDB client is not installed

(base) ashish@ashishlaptop:~/Desktop/node$ node
Welcome to Node.js v12.22.9.
Type ".help" for more information.
> var mongo = require('mongodb');
Uncaught Error: Cannot find module 'mongodb'
Require stack:
- <repl>
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:815:15)
    at Function.Module._load (internal/modules/cjs/loader.js:667:27)
    at Module.require (internal/modules/cjs/loader.js:887:19)
    at require (internal/modules/cjs/helpers.js:74:18) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [ '<repl>' ]
}
>
(base) ashish@ashishlaptop:~/Desktop/node$ npm install mongodb

added 20 packages, and audited 21 packages in 35s

3 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

After the MongoDB client has been installed

> var mongo = require('mongodb');
undefined
>
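To confirm the server itself is reachable, and not just that the Node.js driver loads, the mongosh shell bundled with the MongoDB 6.x packages can ping it; a sketch:

$ mongosh --eval 'db.runCommand({ ping: 1 })'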
Tags: Technology,Database,JavaScript,Linux,

Thursday, October 13, 2022

SSH Setup (on two Ubuntu machines), Error Messages and Resolution

System 1: ashishlaptop

(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ ifconfig
enp1s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 9c:5a:44:09:35:ee  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 375  bytes 45116 (45.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 375  bytes 45116 (45.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

wlp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.131  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::5154:e768:24e1:aece  prefixlen 64  scopeid 0x20<link>
        inet6 2401:4900:47f6:d7d1:b724:d299:1a51:567  prefixlen 64  scopeid 0x0<global>
        inet6 2401:4900:47f6:d7d1:239a:fc2d:c994:6e54  prefixlen 64  scopeid 0x0<global>
        ether b0:fc:36:e5:ad:11  txqueuelen 1000  (Ethernet)
        RX packets 7899  bytes 8775440 (8.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5299  bytes 665165 (665.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ hostname
ashish-Lenovo-ideapad-130-15IKB

To Change The Hostname

(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ sudo nano /etc/hostname
(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ cat /etc/hostname
ashishlaptop

A system restart is required at this point for the new hostname to be reflected everywhere.

To Setup Addressing of Connected Nodes and Their IP Addresses

Original File Contents

(base) ashish@ashishlaptop:~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       ashish-Lenovo-ideapad-130-15IKB

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

File "/etc/hosts" After Change

(base) ashish@ashishlaptop:~$ sudo nano /etc/hosts
(base) ashish@ashishlaptop:~$ cat /etc/hosts
192.168.1.131   ashishlaptop
192.168.1.106   ashishdesktop

Checking Connectivity With The Other Machine

(base) ashish@ashishlaptop:~$ ping 192.168.1.106
PING 192.168.1.106 (192.168.1.106) 56(84) bytes of data.
64 bytes from 192.168.1.106: icmp_seq=1 ttl=64 time=5.51 ms
64 bytes from 192.168.1.106: icmp_seq=2 ttl=64 time=115 ms
64 bytes from 192.168.1.106: icmp_seq=3 ttl=64 time=4.61 ms
64 bytes from 192.168.1.106: icmp_seq=4 ttl=64 time=362 ms
64 bytes from 192.168.1.106: icmp_seq=5 ttl=64 time=179 ms
64 bytes from 192.168.1.106: icmp_seq=6 ttl=64 time=4.53 ms
^C
--- 192.168.1.106 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5012ms
rtt min/avg/max/mdev = 4.525/111.739/361.954/129.976 ms

System 2: ashishdesktop

(base) ashish@ashishdesktop:~$ ifconfig
ens33: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 00:e0:4c:3c:16:6b  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 317  bytes 33529 (33.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 317  bytes 33529 (33.5 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

wlx00e02d420fcb: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.106  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 2401:4900:47f6:d7d1:3cc9:20f6:af75:bb28  prefixlen 64  scopeid 0x0<global>
        inet6 2401:4900:47f6:d7d1:73e6:fca0:4452:382  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::1cdd:53e7:d13a:4f52  prefixlen 64  scopeid 0x20<link>
        ether 00:e0:2d:42:0f:cb  txqueuelen 1000  (Ethernet)
        RX packets 42484  bytes 56651709 (56.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28763  bytes 3324595 (3.3 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

(base) ashish@ashishdesktop:~$ hostname
ashishdesktop

Original Contents of File "/etc/hosts"

(base) ashish@ashishdesktop:~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       ashishdesktop

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Modified Contents of "/etc/hosts"

(base) ashish@ashishdesktop:~$ sudo nano /etc/hosts
(base) ashish@ashishdesktop:~$ cat /etc/hosts
192.168.1.106   ashishdesktop
192.168.1.131   ashishlaptop

SSH Commands

First: Follow steps 1 to 7 on every node.

1) sudo apt-get install openssh-server openssh-client
2) sudo iptables -A INPUT -p tcp --dport ssh -j ACCEPT
3) Use the network adapter 'NAT' in the Guest OS settings, and create a new port forwarding rule "SSH" for port 22.
4) sudo reboot
5) ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
6) sudo service ssh stop
7) sudo service ssh start

Second: After 'First' is done, follow steps 8 to 10 on every node.

8) ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@master
9) ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@slave1
10) ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@slave2
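Once the keys are exchanged, each machine should reach the other without a password prompt. A quick test from the laptop, which should print the remote hostname and exit:

(base) ashish@ashishlaptop:~$ ssh ashish@ashishdesktop hostname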

Error Messages And Resolutions

Error 1: Port 22: Connection refused

(base) ashish@ashishlaptop:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@ashishdesktop
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ashish/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: ssh: connect to host ashishdesktop port 22: Connection refused

Resolution

First follow SSH steps 1 to 7 on both the machines.

Error 2: Could not resolve hostname ashishlaptop

(base) ashish@ashishdesktop:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@ashishlaptop
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ashish/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: ssh: Could not resolve hostname ashishlaptop: Temporary failure in name resolution

Resolution

Modify contents of two files "/etc/hostname" and "/etc/hosts" as shown above as the starting activity for this task.
Tags: Technology,Linux,SSH,

Setting up a three node Spark cluster on Ubuntu using VirtualBox (Apr 2020)

Setting hostname in three Guest OS(s):
$ sudo gedit /etc/hostname
    "to master, slave1, and slave2 on different machines"

-----------------------------------------------------------

ON MASTER (Host OS IP: 192.168.1.12):

$ cat /etc/hosts

192.168.1.12	master
192.168.1.3		slave1
192.168.1.4		slave2

Note: Mappings "127.0.0.1  master" and "127.0.1.1  master" should not be there.

$ cd /usr/local/spark/conf
$ sudo gedit slaves

slave1
slave2

$ sudo gedit spark-env.sh

# YOU CAN SET PORTS HERE IF A PORT-IN-USE ISSUE COMES UP: SPARK_MASTER_PORT=10000 / SPARK_MASTER_WEBUI_PORT=8080
# REMEMBER TO ADD THESE PORTS ALSO IN THE VM SETTING FOR PORT FORWARDING.

export SPARK_WORKER_CORES=2
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/

$ sudo gedit ~/.bashrc
Add the below line at the end of the file.
export SPARK_HOME=/usr/local/spark

Later we will start the whole Spark cluster using the following commands:
$ cd /usr/local/spark/sbin
$ source start-all.sh
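After start-all.sh, the daemons can be verified with jps (it ships with the JDK) on each node, and through the master web UI, by default on port 8080 unless SPARK_MASTER_WEBUI_PORT overrides it as noted above:

$ jps    # expect "Master" on master; "Worker" on slave1 and slave2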

-----------------------------------------------------------

ON SLAVE2 (Host OS IP: 192.168.1.4):

$ cat /etc/hostname
slave2

$ cat /etc/hosts
192.168.1.12	master
192.168.1.3		slave1
192.168.1.4		slave2

Note: Localhost mappings are removed.
---

$ cd /usr/local/spark/conf
$ sudo gedit spark-env.sh
#Setting SPARK_LOCAL_IP to "192.168.1.4" (the Host OS IP) would be wrong and would result in port failure logs.
SPARK_LOCAL_IP=127.0.0.1

SPARK_MASTER_HOST=192.168.1.12
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/

-----------------------------------------------------------

FOLLOW THE STEPS MENTIONED FOR SLAVE2 ALSO FOR SLAVE1 (Host OS IP: 192.168.1.3)

-----------------------------------------------------------

Configuring Key Based Login

Setup SSH in every node such that they can communicate with one another without any prompt for password.

First: Follow steps 1 to 7 on every node.

1) sudo apt-get install openssh-server openssh-client
2) sudo iptables -A INPUT -p tcp --dport ssh -j ACCEPT
3) Use the network adapter 'NAT' in the Guest OS settings, and create a new port forwarding rule "SSH" for port 22.
4) sudo reboot
5) ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
6) sudo service ssh stop
7) sudo service ssh start

Second: After 'First' is done, follow steps 8 to 10 on every node.

8) ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@master
9) ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@slave1
10) ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@slave2

LOGS:

(base) ashish@ashish-VirtualBox:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@ashish-VirtualBox
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ashish/.ssh/id_rsa.pub"
The authenticity of host 'ashish-virtualbox (127.0.1.1)' can't be established.
ECDSA key fingerprint is SHA256:FfT9M7GMzBA/yv8dw+7hKa91B1D68gLlMCINhbj3mt4.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ashish@ashish-virtualbox's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'ashish@ashish-VirtualBox'"
and check to make sure that only the key(s) you wanted were added.

-----------------------------------------------------------

FEW SSH COMMANDS:

1) Checking the SSH status:

(base) ashish@ashish-VirtualBox:~$ sudo service ssh status
[sudo] password for ashish:
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-07-24 18:03:50 IST; 1h 1min ago
  Process: 953 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
  Process: 946 ExecReload=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
  Process: 797 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
 Main PID: 819 (sshd)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/ssh.service
           └─819 /usr/sbin/sshd -D

Jul 24 18:03:28 ashish-VirtualBox systemd[1]: Starting OpenBSD Secure Shell server...
Jul 24 18:03:50 ashish-VirtualBox sshd[819]: Server listening on 0.0.0.0 port 22.
Jul 24 18:03:50 ashish-VirtualBox sshd[819]: Server listening on :: port 22.
Jul 24 18:03:50 ashish-VirtualBox systemd[1]: Started OpenBSD Secure Shell server.
Jul 24 18:04:12 ashish-VirtualBox systemd[1]: Reloading OpenBSD Secure Shell server.
Jul 24 18:04:12 ashish-VirtualBox sshd[819]: Received SIGHUP; restarting.
Jul 24 18:04:12 ashish-VirtualBox sshd[819]: Server listening on 0.0.0.0 port 22.
Jul 24 18:04:12 ashish-VirtualBox sshd[819]: Server listening on :: port 22.

2) (base) ashish@ashish-VirtualBox:/etc/ssh$ sudo gedit ssh_config
   You can change the SSH port here.

3) Use the network adapter 'NAT' in the Guest OS settings, and create a new port forwarding rule "SSH" for the port you are mentioning in Step 2.

4.A) sudo service ssh stop
4.B) sudo service ssh start

LOGS:

ashish@master:~$ ssh-copy-id -i ~/.ssh/id_rsa.pub ashish@slave1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ashish/.ssh/id_rsa.pub"
The authenticity of host 'slave1 (192.168.1.3)' can't be established.
ECDSA key fingerprint is SHA256:+GsO1Q6ilqwIYfZLIrBTtt/5HqltZPSjVlI36C+f7ZE.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ashish@slave1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'ashish@slave1'"
and check to make sure that only the key(s) you wanted were added.
ashish@master:~$ ssh slave1
Welcome to Ubuntu 19.04 (GNU/Linux 5.0.0-21-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

99 updates can be installed immediately.
0 of these updates are security updates.

-----------------------------------------------------------

LOGS FROM SPARK MASTER ON SUCCESSFUL START:

$ cd /usr/local/spark/sbin
$ source start-all.sh

ashish@master:/usr/local/spark/sbin$ cat /usr/local/spark/logs/spark-ashish-org.apache.spark.deploy.master.Master-1-master.out
Spark Command: /usr/lib/jvm/java-8-openjdk-amd64//bin/java -cp /usr/local/spark/conf/:/usr/local/spark/jars/* -Xmx1g org.apache.spark.deploy.master.Master --host master --port 60000 --webui-port 50000
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/08/05 17:32:09 INFO Master: Started daemon with process name: 1664@master
19/08/05 17:32:10 INFO SignalUtils: Registered signal handler for TERM
19/08/05 17:32:10 INFO SignalUtils: Registered signal handler for HUP
19/08/05 17:32:10 INFO SignalUtils: Registered signal handler for INT
19/08/05 17:32:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/08/05 17:32:11 INFO SecurityManager: Changing view acls to: ashish
19/08/05 17:32:11 INFO SecurityManager: Changing modify acls to: ashish
19/08/05 17:32:11 INFO SecurityManager: Changing view acls groups to:
19/08/05 17:32:11 INFO SecurityManager: Changing modify acls groups to:
19/08/05 17:32:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ashish); groups with view permissions: Set(); users with modify permissions: Set(ashish); groups with modify permissions: Set()
19/08/05 17:32:13 INFO Utils: Successfully started service 'sparkMaster' on port 60000.
19/08/05 17:32:13 INFO Master: Starting Spark master at spark://master:60000
19/08/05 17:32:13 INFO Master: Running Spark version 2.4.3
19/08/05 17:32:13 INFO Utils: Successfully started service 'MasterUI' on port 50000.
19/08/05 17:32:13 INFO MasterWebUI: Bound MasterWebUI to 127.0.0.1, and started at http://master:50000
19/08/05 17:32:14 INFO Master: I have been elected leader! New state: ALIVE

-----------------------------------------------------------
Tags: Technology,Spark,Linux

Spark installation on 3 RHEL based nodes cluster (Issue Resolution in Apr 2020)

Configurations:
  Hostname and IP mappings:
    Check the "/etc/hosts" file by opening it in both NANO and VI.

192.168.1.12 MASTER master
192.168.1.3  SLAVE1 slave1
192.168.1.4  SLAVE2 slave2
  
  Software configuration:
    (base) [admin@SLAVE2 downloads]$ java -version
      openjdk version "1.8.0_181"
      OpenJDK Runtime Environment (build 1.8.0_181-b13)
      OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
    
    (base) [admin@MASTER ~]$ cd /opt/ml/downloads
    (base) [admin@MASTER downloads]$ ls
      Anaconda3-2020.02-Linux-x86_64.sh
      hadoop-3.2.1.tar.gz
      scala-2.13.2.rpm
      spark-3.0.0-preview2-bin-hadoop3.2.tgz
   
    # Scala can be downloaded from here.
    # Installation command: sudo rpm -i scala-2.13.2.rpm
   
    (base) [admin@MASTER downloads]$ echo $JAVA_HOME
      /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre/
  
    PATH: /usr/local/hadoop/etc/hadoop/hadoop-env.sh
      JAVA_HOME ON 'master': /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre/
      JAVA_HOME on 'slave1': /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre

~ ~ ~

In the case of no internet connectivity, installation of 'openssh-server' and 'openssh-client' is not straightforward. These packages have nested dependencies that are hard to resolve.

 (base) [admin@SLAVE2 downloads]$ sudo rpm -i openssh-server-8.0p1-4.el8_1.x86_64.rpm
  warning: openssh-server-8.0p1-4.el8_1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
  error: Failed dependencies:
    crypto-policies >= 20180306-1 is needed by openssh-server-8.0p1-4.el8_1.x86_64
    libc.so.6(GLIBC_2.25)(64bit) is needed by openssh-server-8.0p1-4.el8_1.x86_64
    libc.so.6(GLIBC_2.26)(64bit) is needed by openssh-server-8.0p1-4.el8_1.x86_64
    libcrypt.so.1(XCRYPT_2.0)(64bit) is needed by openssh-server-8.0p1-4.el8_1.x86_64
    libcrypto.so.1.1()(64bit) is needed by openssh-server-8.0p1-4.el8_1.x86_64
    libcrypto.so.1.1(OPENSSL_1_1_0)(64bit) is needed by openssh-server-8.0p1-4.el8_1.x86_64
    libcrypto.so.1.1(OPENSSL_1_1_1b)(64bit) is needed by openssh-server-8.0p1-4.el8_1.x86_64
    openssh = 8.0p1-4.el8_1 is needed by openssh-server-8.0p1-4.el8_1.x86_64

~ ~ ~

Doing SSH setup:
  1) sudo iptables -A INPUT -p tcp --dport ssh -j ACCEPT
  2) sudo reboot
  3) ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
  4) ssh-copy-id -i ~/.ssh/id_rsa.pub admin@SLAVE2
  5) ssh-copy-id -i ~/.ssh/id_rsa.pub admin@MASTER
  6) ssh-copy-id -i ~/.ssh/id_rsa.pub admin@SLAVE1

COMMAND FAILURE ON RHEL:
  [admin@MASTER ~]$ sudo service ssh stop
    Redirecting to /bin/systemctl stop ssh.service
    Failed to stop ssh.service: Unit ssh.service not loaded.
    
  [admin@MASTER ~]$ sudo service ssh start
    Redirecting to /bin/systemctl start ssh.service
    Failed to start ssh.service: Unit not found.
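The unit name is the issue here: on RHEL the OpenSSH server unit is called sshd, not ssh. The equivalent commands are:

  $ sudo systemctl status sshd
  $ sudo systemctl stop sshd
  $ sudo systemctl start sshd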

Testing of SSH is through: ssh 'admin@SLAVE1'

~ ~ ~

To activate Conda 'base' environment at the start up of system, following code snippet goes at the end of "~/.bashrc" file.

  # >>> conda initialize >>>
  # !! Contents within this block are managed by 'conda init' !!
  __conda_setup="$('/home/admin/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
  if [ $? -eq 0 ]; then
      eval "$__conda_setup"
  else
      if [ -f "/home/admin/anaconda3/etc/profile.d/conda.sh" ]; then
          . "/home/admin/anaconda3/etc/profile.d/conda.sh"
      else
          export PATH="/home/admin/anaconda3/bin:$PATH"
      fi
  fi
  unset __conda_setup
  # <<< conda initialize <<<

~ ~ ~

CHECKING THE OUTPUT OF 'start-dfs.sh' ON MASTER:
 (base) [admin@MASTER sbin]$ ps aux | grep java
   admin     7461 40.5  1.4 6010824 235120 ?      Sl   21:57   0:07 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre/bin/java -Dproc_secondarynamenode -Djava.net.preferIPv4Stack=true -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dyarn.log.dir=/usr/local/hadoop/logs -Dyarn.log.file=hadoop-admin-secondarynamenode-MASTER.log -Dyarn.home.dir=/usr/local/hadoop -Dyarn.root.logger=INFO,console -Djava.library.path=/usr/local/hadoop/lib/native -Dhadoop.log.dir=/usr/local/hadoop/logs -Dhadoop.log.file=hadoop-admin-secondarynamenode-MASTER.log -Dhadoop.home.dir=/usr/local/hadoop -Dhadoop.id.str=admin -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml o.a.h.hdfs.server.namenode.SecondaryNameNode
   
   ...

OR
  $ ps -aux | grep java | awk '{print $12}'
    -Dproc_secondarynamenode
    ...

~ ~ ~

CREATING THE 'DATANODE' AND 'NAMENODE' DIRECTORIES:

  (base) [admin@MASTER logs]$ cd ~
  (base) [admin@MASTER ~]$ pwd
      /home/admin
  (base) [admin@MASTER ~]$ cd ..
  (base) [admin@MASTER home]$ sudo mkdir hadoop
  (base) [admin@MASTER home]$ sudo chmod 777 hadoop
  (base) [admin@MASTER home]$ cd hadoop
  (base) [admin@MASTER hadoop]$ sudo mkdir data
  (base) [admin@MASTER hadoop]$ sudo chmod 777 data
  (base) [admin@MASTER hadoop]$ cd data
  (base) [admin@MASTER data]$ sudo mkdir dataNode
  (base) [admin@MASTER data]$ sudo chmod 777 dataNode
  (base) [admin@MASTER data]$ sudo mkdir nameNode
  (base) [admin@MASTER data]$ sudo chmod 777 nameNode
  (base) [admin@MASTER data]$ pwd
      /home/hadoop/data
  (base) [admin@SLAVE1 data]$ sudo chown admin *
  (base) [admin@MASTER data]$ ls -lrt
      total 0
      drwxrwxrwx. 2 admin root 6 Apr 27 22:24 dataNode
      drwxrwxrwx. 2 admin root 6 Apr 27 22:37 nameNode

# Error example with the NameNode execution if 'data/nameNode' folder is not accessible:

File: /usr/local/hadoop/logs/hadoop-admin-namenode-MASTER.log:

2019-10-17 21:45:39,714 WARN o.a.h.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
o.a.h.hdfs.server.common.InconsistentFSStateException: Directory /home/hadoop/data/nameNode is in an inconsistent state: storage directory does not exist or is not accessible.
	...
  at o.a.h.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1692)
	at o.a.h.hdfs.server.namenode.NameNode.main(NameNode.java:1759)
	
# Error example with the DataNode execution if 'data/dataNode' folder is not accessible:

File: /usr/local/hadoop/logs/hadoop-admin-datanode-SLAVE1.log

2019-10-17 22:30:49,302 WARN o.a.h.hdfs.server.datanode.checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/home/hadoop/data/dataNode
java.io.FileNotFoundException: File file:/home/hadoop/data/dataNode does not exist
        ...
2019-10-17 22:30:49,307 ERROR o.a.h.hdfs.server.datanode.DataNode: Exception in secureMain
o.a.h.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
        ...
        at o.a.h.hdfs.server.datanode.DataNode.main(DataNode.java:2924)
2019-10-17 22:30:49,310 INFO o.a.h.util.ExitUtil: Exiting with status 1: o.a.h.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
2019-10-17 22:30:49,335 INFO o.a.h.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at SLAVE1/192.168.1.3
************************************************************/

~ ~ ~

If 'data/dataNode' is not writable by other nodes on the cluster, the following failure logs came up on SLAVE1:
File: /usr/local/hadoop/logs/hadoop-admin-datanode-MASTER.log

2019-10-17 22:37:33,820 WARN o.a.h.hdfs.server.datanode.checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/home/hadoop/data/dataNode
EPERM: Operation not permitted
        ...
        at java.lang.Thread.run(Thread.java:748)
2019-10-17 22:37:33,825 ERROR o.a.h.hdfs.server.datanode.DataNode: Exception in secureMain
o.a.h.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
        at o.a.h.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:231)
        ...
        at o.a.h.hdfs.server.datanode.DataNode.main(DataNode.java:2924)
2019-10-17 22:37:33,829 INFO o.a.h.util.ExitUtil: Exiting with status 1: o.a.h.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
2019-10-17 22:37:33,838 INFO o.a.h.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at SLAVE1/192.168.1.3
************************************************************/ 

~ ~ ~

Success logs when the "DataNode" program comes up on the slave machines:

SLAVE1 SUCCESS MESSAGE FOR DATANODE:

	2019-10-17 22:49:47,572 INFO o.a.h.hdfs.server.datanode.DataNode: STARTUP_MSG:
	/************************************************************
	STARTUP_MSG: Starting DataNode
	STARTUP_MSG:   host = SLAVE1/192.168.1.3
	STARTUP_MSG:   args = []
	STARTUP_MSG:   version = 3.2.1
	...
	STARTUP_MSG:   build = https://gitbox.apache.org/repos/asf/hadoop.git -r b3cbbb467e22ea829b3808f4b7b01d07e0bf3842; compiled by 'rohithsharmaks' on 2019-09-10T15:56Z
	STARTUP_MSG:   java = 1.8.0_171
	...
	2019-10-17 22:49:49,489 INFO o.a.h.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
	2019-10-17 22:49:49,543 INFO o.a.h.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:9866
	2019-10-17 22:49:49,549 INFO o.a.h.hdfs.server.datanode.DataNode: Balancing bandwidth is 10485760 bytes/s
	2019-10-17 22:49:49,549 INFO o.a.h.hdfs.server.datanode.DataNode: Number threads for balancing is 50 
	...

ALSO:
	(base) [admin@SLAVE1 logs]$ ps -aux | grep java | awk '{print $12}'
		...
		-Dproc_datanode
		...

MASTER SUCCESS MESSAGE FOR DATANODE:
	(base) [admin@MASTER sbin]$ ps -aux | grep java | awk '{print $12}'
		-Dproc_datanode
		-Dproc_secondarynamenode
		...

~ ~ ~

FAILURE LOGS FROM MASTER FOR ERROR IN NAMENODE:
(base) [admin@MASTER logs]$ cat hadoop-admin-namenode-MASTER.log
	2019-10-17 22:49:56,593 ERROR o.a.h.hdfs.server.namenode.NameNode: Failed to start namenode.
	java.io.IOException: NameNode is not formatted.
			at o.a.h.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:252)
			...
			at o.a.h.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1692)
			at o.a.h.hdfs.server.namenode.NameNode.main(NameNode.java:1759)
	2019-10-17 22:49:56,596 INFO o.a.h.util.ExitUtil: Exiting with status 1: java.io.IOException: NameNode is not formatted.
	2019-10-17 22:49:56,600 INFO o.a.h.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
	/************************************************************
	SHUTDOWN_MSG: Shutting down NameNode at MASTER/192.168.1.12
	************************************************************/ 

FIX:
	Previously: "hadoop namenode -format" 
	On Hadooop 3.X: "hdfs namenode format"

	Hadoop namenode directory contains the fsimage and configuration files that hold the basic information about Hadoop file system such as where is data available, which user created the files, etc.

	If you format the NameNode, then the above information is deleted from NameNode directory which is specified in the "$HADOOP_HOME/etc/hadoop/hdfs-site.xml" as "dfs.namenode.name.dir"

	After formatting you still have the data on the Hadoop, but not the NameNode metadata.

SUCCESS AFTER THE FIX ON MASTER:
	(base) [admin@MASTER sbin]$ ps -aux | grep java | awk '{print $12}'
		-Dproc_namenode
		-Dproc_datanode
		-Dproc_secondarynamenode
		...

~ ~ ~

MOVING ON TO SPARK:
WE HAVE YARN SO WE WILL NOT MAKE USE OF '/usr/local/spark/conf/slaves' FILE.

(base) [admin@MASTER conf]$ cat slaves.template
# A Spark Worker will be started on each of the machines listed below.
... 
		
~ ~ ~

FAILURE LOGS FROM 'spark-submit':
2019-10-17 23:23:03,832 INFO ipc.Client: Retrying connect to server: 192.168.1.12/192.168.1.12:8032. Already tried 0 time(s); maxRetries=45
2019-10-17 23:23:23,836 INFO ipc.Client: Retrying connect to server: 192.168.1.12/192.168.1.12:8032. Already tried 1 time(s); maxRetries=45
2019-10-17 23:23:43,858 INFO ipc.Client: Retrying connect to server: 192.168.1.12/192.168.1.12:8032. Already tried 2 time(s); maxRetries=45 

THE PROBLEM IS IN CONNECTING WITH THE RESOURCE MANAGER AS DESCRIBED IN THE PROPERTIES FILE YARN-SITE.XML ($HADOOP_HOME/etc/hadoop/yarn-site.xml):
	LOOK FOR THIS: yarn.resourcemanager.address
	FIX: SET IT TO MASTER IP
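A sketch of the property in $HADOOP_HOME/etc/hadoop/yarn-site.xml; 8032 is the default ResourceManager client port, matching the retry logs above:

    <property>
      <name>yarn.resourcemanager.address</name>
      <value>192.168.1.12:8032</value>
    </property>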
		
~ ~ ~

SUCCESS LOGS FOR STARTING OF SERVICES AFTER INSTALLATION OF HADOOP AND SPARK:
	(base) [admin@MASTER hadoop/sbin]$ start-all.sh
		Starting namenodes on [master]
		
		Starting datanodes
		master: This system is restricted to authorized users. 
		slave1: This system is restricted to authorized users. 
		
		Starting secondary namenodes [MASTER]
		MASTER: This system is restricted to authorized users. 
		
		Starting resourcemanager
		
		Starting nodemanagers
		master: This system is restricted to authorized users. 
		slave1: This system is restricted to authorized users. 
		
		(base) [admin@MASTER sbin]$

	(base) [admin@MASTER sbin]$ ps aux | grep java | awk '{print $12}'
		-Dproc_namenode
		-Dproc_datanode
		-Dproc_secondarynamenode
		-Dproc_resourcemanager
		-Dproc_nodemanager
		...

ON SLAVE1:
	(base) [admin@SLAVE1 ~]$ ps aux | grep java | awk '{print $12}'
		-Dproc_datanode
		-Dproc_nodemanager
		...

~ ~ ~

FAILURE LOGS FROM SPARK-SUBMIT ON MASTER:
	2019-10-17 23:54:26,189 INFO cluster.YarnScheduler: Adding task set 0.0 with 100 tasks
	2019-10-17 23:54:41,247 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
	2019-10-17 23:54:56,245 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
	2019-10-17 23:55:11,246 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Reason:	The Spark master does not have any resources, such as worker or slave nodes, registered to execute the job.
Fix for setup: changes in the /usr/local/hadoop/etc/hadoop/yarn-site.xml
Ref: StackOverflow

~ ~ ~

CONNECTIVITY (OR PORT) RELATED ISSUE INSTANCE 1:
	ISSUE WITH DATANODE ON SLAVE1:
		(base) [admin@SLAVE1 logs]$ pwd
			/usr/local/hadoop/logs
			
		(base) [admin@SLAVE1 logs]$cat hadoop-admin-datanode-SLAVE1.log
		
		(base) [admin@SLAVE1 logs]$
			2019-10-17 22:50:40,384 WARN o.a.h.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.12:9000
			2019-10-17 22:50:46,416 INFO o.a.h.ipc.Client: Retrying connect to server: master/192.168.1.12:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
			
CONNECTIVITY (OR PORT) RELATED ISSUE INSTANCE 2:
	(base) [admin@MASTER logs]$ cat hadoop-admin-nodemanager-MASTER.log
		2019-10-18 00:24:17,473 INFO o.a.h.ipc.Client: Retrying connect to server: MASTER/192.168.1.12:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

FIX: Allow connectivity between the IPs of all nodes on the cluster, and bring down the firewall on each node:
    sudo /sbin/iptables -A INPUT -p tcp -s 192.168.1.12 -j ACCEPT
    sudo /sbin/iptables -A OUTPUT -p tcp -d 192.168.1.12 -j ACCEPT
    sudo /sbin/iptables -A INPUT -p tcp -s 192.168.1.3 -j ACCEPT
    sudo /sbin/iptables -A OUTPUT -p tcp -d 192.168.1.3 -j ACCEPT
    
    sudo systemctl stop iptables
    sudo service firewalld stop

Also, check port (here 80) connectivity as shown below:
1. lsof -i :80
2. netstat -an | grep 80 | grep LISTEN
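If netcat is installed, it can also probe a specific host/port pair from the other machine, which lsof and netstat (both local-only) cannot; a sketch for the NameNode port:

$ nc -zv 192.168.1.12 9000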

~ ~ ~

ISSUE IN SPARK-SUBMIT LOGS ON MASTER:
    Exception: Python in worker has different version 2.7 than that in driver 3.7, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.

FIX IS TO BE DONE ON ALL THE NODES ON THE CLUSTER:
	(base) [admin@SLAVE1 bin]$ ls -lrt /home/admin/anaconda3/bin/python3.7
	-rwx------. 1 admin wheel 12812592 May  6  2019 /home/admin/anaconda3/bin/python3.7

	(base) [admin@MASTER spark]$ pwd
	/usr/local/spark/conf
	
	(base) [admin@MASTER conf]$ ls
	fairscheduler.xml.template  log4j.properties.template  metrics.properties.template  slaves  slaves.template  spark-defaults.conf.template  spark-env.sh.template
	
	(base) [admin@MASTER conf]$ cp spark-env.sh.template spark-env.sh

	PUT THESE PROPERTIES IN THE FILE "/usr/local/spark/conf/spark-env.sh":
		export PYSPARK_PYTHON=/home/admin/anaconda3/bin/python3.7
		export PYSPARK_DRIVER_PYTHON=/home/admin/anaconda3/bin/python3.7

~ ~ ~

ERROR LOGS IF 'EXECUTOR-MEMORY' ARGUMENT OF SPARK-SUBMIT ASKS FOR MORE MEMORY THAN DEFINED IN YARN CONFIGURATION:

FILE INSTANCE 1:
  $HADOOP_HOME: /usr/local/hadoop
  
  (base) [admin@MASTER hadoop]$ vi $HADOOP_HOME/etc/hadoop/yarn-site.xml
  
  <configuration>
    <property>
      <name>yarn.acl.enable</name>
      <value>0</value>
    </property>
    
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>192.168.1.12</value>
    </property>
    
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
  
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>4000</value>
    </property>
    
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>8000</value>
    </property>
    
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>128</value>
    </property>
    
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>
  </configuration>

ERROR INSTANCE 1:

	(base) [admin@MASTER sbin]$ ../bin/spark-submit --master yarn --executor-memory 12G ../examples/src/main/python/pi.py 100
	
	2019-10-18 13:59:07,891 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
	2019-10-18 13:59:09,502 INFO spark.SparkContext: Running Spark version 3.0.0-preview2
	2019-10-18 13:59:09,590 INFO resource.ResourceUtils: ==============================================================
	2019-10-18 13:59:09,593 INFO resource.ResourceUtils: Resources for spark.driver:

	2019-10-18 13:59:09,594 INFO resource.ResourceUtils: ==============================================================
	2019-10-18 13:59:09,596 INFO spark.SparkContext: Submitted application: PythonPi
	2019-10-18 13:59:09,729 INFO spark.SecurityManager: Changing view acls to: admin
	2019-10-18 13:59:09,729 IN

	2019-10-18 13:59:13,927 INFO spark.SparkContext: Successfully stopped SparkContext
	Traceback (most recent call last):
	  File "/usr/local/spark/sbin/../examples/src/main/python/pi.py", line 33, in [module]
		.appName("PythonPi")\
	  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 183, in getOrCreate
	  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 370, in getOrCreate
	  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 130, in __init__
	  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 192, in _do_init
	  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 309, in _initialize_context
	  File "/usr/local/spark/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1554, in __call__
	  File "/usr/local/spark/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 328, in get_return_value
	py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
	: java.lang.IllegalArgumentException: Required executor memory (12288 MB), offHeap memory (0) MB, overhead (1228 MB), and PySpark memory (0 MB) is above the max threshold (4000 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
			...
			at java.lang.Thread.run(Thread.java:748)

	2019-10-18 13:59:14,005 INFO util.ShutdownHookManager: Shutdown hook called
	2019-10-18 13:59:14,007 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-fbead587-b1ae-4e8e-acd4-160e585a6f34
	2019-10-18 13:59:14,012 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-3331bae2-e2d1-47f6-886c-317be6c98339 

FILE INSTANCE 2:

  <configuration>
    <property>
      <name>yarn.acl.enable</name>
      <value>0</value>
    </property>
    
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>192.168.1.12</value>
    </property>
    
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>12000</value>
    </property>
    
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>128</value>
    </property>
  </configuration>
  
ERROR INSTANCE 2:
  (base) [admin@MASTER sbin]$ ../bin/spark-submit --master yarn ../examples/src/main/python/pi.py 100
    py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
    : java.lang.IllegalArgumentException: Required executor memory (12288 MB), offHeap memory (0) MB, overhead (1228 MB), and PySpark memory (0 MB) is above the max threshold (10000 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'. 
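
Both failures come down to the same arithmetic: YARN will only grant a container of at most yarn.scheduler.maximum-allocation-mb (and a node can never offer more than its yarn.nodemanager.resource.memory-mb), while Spark asks for the executor memory plus an overhead that defaults to max(384 MB, 10% of executor memory) -- which is exactly why the logs show 12288 + 1228 MB against thresholds of 4000 and 10000 MB. Below is a minimal sketch of the two ways out, assuming the FILE INSTANCE 2 limits; the 12 GiB request in ERROR INSTANCE 2 presumably persists via spark.executor.memory in spark-defaults.conf, since the command line no longer passes it:

# Option 1: shrink the executor so memory + overhead fits under the 10000 MB cap.
# 8192 MB + max(384, 819) MB of overhead ~= 9011 MB <= 10000 MB.
../bin/spark-submit --master yarn \
    --executor-memory 8G \
    ../examples/src/main/python/pi.py 100

# Option 2: raise both YARN limits to at least 12288 + 1228 = 13516 MB
# (e.g., 14336 MB) in yarn-site.xml on every node, then restart YARN:
#   yarn.scheduler.maximum-allocation-mb = 14336
#   yarn.nodemanager.resource.memory-mb  = 14336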

Related Articles:
% Getting started with Hadoop on Ubuntu in VirtualBox
% Setting up three node Hadoop cluster on Ubuntu using VirtualBox
% Getting started with Spark on Ubuntu in VirtualBox
% Setting up a three node Spark cluster on Ubuntu using VirtualBox (Apr 2020)
% Notes on setting up Spark with YARN three node cluster
Tags: Technology,Spark,Linux

Sunday, September 25, 2022

Identifying 'Who Am I' on Ubuntu

1. To Get The Processor Information

(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~/Desktop$ more /proc/cpuinfo

processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 78 model name : Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz stepping : 3 microcode : 0xf0 cpu MHz : 2000.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 22 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities vmx flags : vnmi preemption_timer invvpid ept_x_only ept_ad ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple pml bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds mmio_stale_data retbleed bogomips : 3999.93 clflush size : 64 cache_alignment : 64 address sizes : 39 bits physical, 48 bits virtual power management:

processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 78 model name : Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz stepping : 3 microcode : 0xf0 cpu MHz : 999.644 cache size : 3072 KB physical id : 0 siblings : 4 core id : 1 cpu cores : 2 apicid : 2 initial apicid : 2 fpu : yes fpu_exception : yes cpuid level : 22 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities vmx flags : vnmi preemption_timer invvpid ept_x_only ept_ad ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple pml bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds mmio_stale_data retbleed bogomips : 3999.93 clflush size : 64 cache_alignment : 64 address sizes : 39 bits physical, 48 bits virtual power management:

processor : 2 vendor_id : GenuineIntel cpu family : 6 model : 78 model name : Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz stepping : 3 microcode : 0xf0 cpu MHz : 1000.005 cache size : 3072 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 22 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities vmx flags : vnmi preemption_timer invvpid ept_x_only ept_ad ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple pml bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds mmio_stale_data retbleed bogomips : 3999.93 clflush size : 64 cache_alignment : 64 address sizes : 39 bits physical, 48 bits virtual power management:

processor : 3 vendor_id : GenuineIntel cpu family : 6 model : 78 model name : Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz stepping : 3 microcode : 0xf0 cpu MHz : 2000.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 1 cpu cores : 2 apicid : 3 initial apicid : 3 fpu : yes fpu_exception : yes cpuid level : 22 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities vmx flags : vnmi preemption_timer invvpid ept_x_only ept_ad ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple pml bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds mmio_stale_data retbleed bogomips : 3999.93 clflush size : 64 cache_alignment : 64 address sizes : 39 bits physical, 48 bits virtual power management:

To get the processor model, use the below command in a terminal:

$ cat /proc/cpuinfo | grep 'name' | uniq
model name : Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz

To get the number of logical processors:

$ cat /proc/cpuinfo | grep process | wc -l
4

From One More System:

(base) ashish@ashishdesktop:~$ cat /proc/cpuinfo

processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 CPU 4300 @ 1.80GHz stepping : 2 microcode : 0x5d cpu MHz : 1795.029 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm pti dtherm bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit bogomips : 3590.05 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management:

processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 CPU 4300 @ 1.80GHz stepping : 2 microcode : 0x5d cpu MHz : 1795.029 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm pti dtherm bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit bogomips : 3590.05 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management:
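
The grep pipelines above can be folded into a short sketch; lscpu (from util-linux) prints a similar one-screen summary if it is installed:

# Print the CPU model once and count the logical processors.
grep -m1 'model name' /proc/cpuinfo
grep -c '^processor' /proc/cpuinfo
# Or, for a condensed overview of sockets, cores and threads:
lscpu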

2. Getting Your Own Username

(base) ashish@ashishdesktop:~$ whoami
ashish
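
whoami answers with just the login name. A companion sketch using id (also part of GNU coreutils) shows the numeric IDs and groups behind it; the output line below is hypothetical for this machine:

# id shows the numeric UID/GID and group memberships behind the username.
id
# Hypothetical output: uid=1000(ashish) gid=1000(ashish) groups=1000(ashish),27(sudo)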

3. Getting Your IP Address on The Network

(base) ashish@ashishdesktop:~$ ifconfig
ens33: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 00:e0:4c:3c:16:6b  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 320  bytes 36546 (36.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 320  bytes 36546 (36.5 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

wlx00e02d420fcb: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.106  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::1cdd:53e7:d13a:4f52  prefixlen 64  scopeid 0x20<link>
        inet6 2401:4900:47f1:ad21:8d1d:6756:3730:f38b  prefixlen 64  scopeid 0x0<global>
        inet6 2401:4900:47f1:ad21:ca26:3ad6:25b6:84af  prefixlen 64  scopeid 0x0<global>
        ether 00:e0:2d:42:0f:cb  txqueuelen 1000  (Ethernet)
        RX packets 5930  bytes 7184630 (7.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4448  bytes 530557 (530.5 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
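
Note that ifconfig ships with the net-tools package, which recent Ubuntu releases no longer install by default; the iproute2 replacements below cover the same ground:

ip addr show    # per-interface addresses, like ifconfig
ip -brief addr  # one line per interface
hostname -I     # just the assigned IP addresses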

4. Viewing The User's Command History (The First Ten or The Last Ten Entries)

(base) ashish@ashishdesktop:~$ history | head
    1  touch stocks_20220202.txt
    2  ls
    3  ls -l
    4  chmod 777 Anaconda3-2021.11-Linux-x86_64.sh
    5  ./Anaconda3-2021.11-Linux-x86_64.sh
    6  conda install yfinance -c conda-forge
    7  conda install pandas_datareader -c conda-forge
    8  pip install pandas_datareader
    9  curl
   10  curl --help
(base) ashish@ashishdesktop:~$ history | tail
  101  pwd
  102  sudo apt install git
  103  ls
  104  gedit
  105  nano
  106  cat /proc/cpuinfo
  107  whoami
  108  ifconfig
  109  history | head
  110  history | tail

5. Getting a Listing of The Shell's Built-in Commands (help lists Bash builtins, not every command on the system)

(base) ashish@ashishdesktop:~$ help
GNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu)
These shell commands are defined internally.  Type `help' to see this list.
Type `help name' to find out more about the function `name'.
Use `info bash' to find out more about the shell in general.
Use `man -k' or `info' to find out more about commands not in this list.

A star (*) next to a name means that the command is disabled.

 job_spec [&]                                    history [-c] [-d offset] [n] or history -anrw [filename] or history>
 (( expression ))                                if COMMANDS; then COMMANDS; [ elif COMMANDS; then COMMANDS; ]... [ >
 . filename [arguments]                          jobs [-lnprs] [jobspec ...] or jobs -x command [args]
 :                                               kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill >
 [ arg... ]                                      let arg [arg ...]
 [[ expression ]]                                local [option] name[=value] ...
 alias [-p] [name[=value] ... ]                  logout [n]
 bg [job_spec ...]                               mapfile [-d delim] [-n count] [-O origin] [-s count] [-t] [-u fd] [>
 bind [-lpsvPSVX] [-m keymap] [-f filename] [-q name] [-u name] [-r k>   popd [-n] [+N | -N]
 break [n]                                       printf [-v var] format [arguments]
 builtin [shell-builtin [arg ...]]               pushd [-n] [+N | -N | dir]
 caller [expr]                                   pwd [-LP]
 case WORD in [PATTERN [| PATTERN]...) COMMANDS ;;]... esac   read [-ers] [-a array] [-d delim] [-i text] [-n nchars] [-N nchars]>
 cd [-L|[-P [-e]] [-@]] [dir]                    readarray [-d delim] [-n count] [-O origin] [-s count] [-t] [-u fd]>
 command [-pVv] command [arg ...]                readonly [-aAf] [name[=value] ...] or readonly -p
 compgen [-abcdefgjksuv] [-o option] [-A action] [-G globpat] [-W wor>   return [n]
 complete [-abcdefgjksuv] [-pr] [-DEI] [-o option] [-A action] [-G gl>   select NAME [in WORDS ... ;] do COMMANDS; done
 compopt [-o|+o option] [-DEI] [name ...]        set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
 continue [n]                                    shift [n]
 coproc [NAME] command [redirections]            shopt [-pqsu] [-o] [optname ...]
 declare [-aAfFgiIlnrtux] [-p] [name[=value] ...]   source filename [arguments]
 dirs [-clpv] [+N] [-N]                          suspend [-f]
 disown [-h] [-ar] [jobspec ... | pid ...]       test [expr]
 echo [-neE] [arg ...]                           time [-p] pipeline
 enable [-a] [-dnps] [-f filename] [name ...]    times
 eval [arg ...]                                  trap [-lp] [[arg] signal_spec ...]
 exec [-cl] [-a name] [command [argument ...]] [redirection ...]   true
 exit [n]                                        type [-afptP] name [name ...]
 export [-fn] [name[=value] ...] or export -p    typeset [-aAfFgiIlnrtux] [-p] name[=value] ...
 false                                           ulimit [-SHabcdefiklmnpqrstuvxPT] [limit]
 fc [-e ename] [-lnr] [first] [last] or fc -s [pat=rep] [command]   umask [-p] [-S] [mode]
 fg [job_spec]                                   unalias [-a] name [name ...]
 for NAME [in WORDS ... ] ; do COMMANDS; done    unset [-f] [-v] [-n] [name ...]
 for (( exp1; exp2; exp3 )); do COMMANDS; done   until COMMANDS; do COMMANDS; done
 function name { COMMANDS ; } or name () { COMMANDS ; }   variables - Names and meanings of some shell variables
 getopts optstring name [arg ...]                wait [-fn] [-p var] [id ...]
 hash [-lr] [-p pathname] [-dt] [name ...]       while COMMANDS; do COMMANDS; done
 help [-dms] [pattern ...]                       { COMMANDS ; }
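
If the goal really is most of the commands available on the system, rather than just the builtins above, Bash's compgen builtin can enumerate them; a minimal sketch:

# Everything bash could run: aliases, functions, builtins and $PATH entries.
compgen -c | sort -u | wc -l
compgen -b    # builtins only -- the same set `help` describes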

6. Present Working Directory

(base) ashish@ashishdesktop:~/Desktop/moni$ pwd
/home/ashish/Desktop/moni

7. Getting Information About The User Through Environment Variables

(base) ashish@ashishdesktop:~/Desktop/moni$ echo $USER
ashish
(base) ashish@ashishdesktop:~/Desktop/moni$ echo $HOME
/home/ashish
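
printenv gives the same answers without shell interpolation; a small sketch:

# printenv accepts variable names directly; id -un is yet another route to the username.
printenv USER HOME
id -un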

8. Getting The Information About The Operating System

8.1:
$ uname -a
Linux ashishdesktop 5.15.0-41-generic #44-Ubuntu SMP Wed Jun 22 14:20:53 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

8.2:
$ uname
Linux

8.3:
$ hostnamectl
 Static hostname: ashishlaptop
       Icon name: computer-laptop
         Chassis: laptop
      Machine ID: 67709590ff664196b5c2eed56b83eb45
         Boot ID: 1741f166cc4b4adda434ed6df857e9d2
Operating System: Ubuntu 22.04.1 LTS
          Kernel: Linux 5.15.0-52-generic
    Architecture: x86-64
 Hardware Vendor: Lenovo
  Hardware Model: Lenovo ideapad 130-15IKB

8.4:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy

Note: Commands 8.3 and 8.4 won't work in Termux on an Android device.
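
Where 8.3 and 8.4 are unavailable, /etc/os-release is a plain-text fallback present on most Linux distributions (though not in Termux, which runs on Android):

cat /etc/os-release
# Or just the human-readable name:
. /etc/os-release && echo "$PRETTY_NAME"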

9. Finding Out Which Shell I Am Using on Ubuntu

Please note that $SHELL is the shell for the current user but not necessarily the shell that is running at the moment. Try the following examples:

(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ echo $SHELL
/bin/bash
(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ sh
$ echo $SHELL
/bin/bash
$ ps -p $$
    PID TTY          TIME CMD
  24437 pts/1    00:00:00 sh
$

Errors You Will Notice When Some of The Commonly Known Shells Are Not Available on Your System

(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ ksh
Command 'ksh' not found, but can be installed with:
sudo apt install ksh93u+m  # version 1.0.0~beta.2-1, or
sudo apt install mksh      # version 59c-16
(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ csh
Command 'csh' not found, but can be installed with:
sudo apt install csh   # version 20110502-7, or
sudo apt install tcsh  # version 6.21.00-1.1
(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~$ tcsh
Command 'tcsh' not found, but can be installed with:
sudo apt install tcsh

How do I check which shell I am using?

Use the ps command with the -p {pid} option; it selects the process whose process ID appears in pid. Since $$ expands to the PID of the current shell, the following command shows which shell you are in:

$ ps -p $$
    PID TTY          TIME CMD
  24437 pts/1    00:00:00 sh
$
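
A few more ways to the same answer, all keyed off $$; the readlink variant works only on Linux because it relies on /proc:

ps -p $$ -o comm=       # just the command name, e.g. "bash" or "sh"
echo $0                 # the name the shell was invoked as
readlink /proc/$$/exe   # resolved path of the running shell binary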

10. Getting Your RAM Information

Human Readable

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            11Gi       8.5Gi       262Mi       850Mi       2.8Gi       1.9Gi
Swap:          2.0Gi       5.0Mi       2.0Gi

In MBs

$ free -m
               total        used        free      shared  buff/cache   available
Mem:           11835        8709         263         850        2862        1973
Swap:           2047           5        2042

In GBs

$ free -g
               total        used        free      shared  buff/cache   available
Mem:              11           8           0           0           2           1
Swap:              1           0           1
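
free reads its numbers from /proc/meminfo, so the raw values are a grep away:

grep -E 'MemTotal|MemAvailable|SwapTotal' /proc/meminfo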

11. Space Taken by Various Directories in The Current Directory

(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~/Desktop/ws/gh/public/pubML$ du
548     ./e4_stock_market_price_prediction/.ipynb_checkpoints
264     ./e4_stock_market_price_prediction/files_input/infy
788     ./e4_stock_market_price_prediction/files_input/nifty50
1056    ./e4_stock_market_price_prediction/files_input
2152    ./e4_stock_market_price_prediction
12      ./e2_sentiment_analysis_on_stock_market_data/files
28      ./e2_sentiment_analysis_on_stock_market_data/.ipynb_checkpoints
636     ./e2_sentiment_analysis_on_stock_market_data
6804    ./e5_sentiment_analysis_using_rnn_lstm_and_bidirectional_lstm/input/sentences_and_phrases_150k
13384   ./e5_sentiment_analysis_using_rnn_lstm_and_bidirectional_lstm/input
18592   ./e5_sentiment_analysis_using_rnn_lstm_and_bidirectional_lstm
148     ./Apriori Algorithm for Association Analysis/Two Column Format
548     ./Apriori Algorithm for Association Analysis
3596    ./weka/e2_coalindia_linear_regression/screenshots
3656    ./weka/e2_coalindia_linear_regression
56      ./weka/e1_boston_housing
3716    ./weka
65660   ./e8_bot_detection_on_twitter/input
238288  ./e8_bot_detection_on_twitter
952     ./e1_bengaluru_housing
608     ./e6_peformance testing of Sentence Transformers for sentence encoding
7024    ./e3_Prediction_of_Nifty50_index_using_LSTM_based_model/files_1/models/p5
7028    ./e3_Prediction_of_Nifty50_index_using_LSTM_based_model/files_1/models
7032    ./e3_Prediction_of_Nifty50_index_using_LSTM_based_model/files_1
136     ./e3_Prediction_of_Nifty50_index_using_LSTM_based_model/.ipynb_checkpoints
788     ./e3_Prediction_of_Nifty50_index_using_LSTM_based_model/files_input
8092    ./e3_Prediction_of_Nifty50_index_using_LSTM_based_model
212     ./e7_using twitter api to fetch trending topics, tweets and user posting them
381724  .

(base) ashish@ashish-Lenovo-ideapad-130-15IKB:~/Desktop/ws/gh/public/pubML$ du -shx * | sort -rh | head -10
233M    e8_bot_detection_on_twitter
19M     e5_sentiment_analysis_using_rnn_lstm_and_bidirectional_lstm
8.0M    e3_Prediction_of_Nifty50_index_using_LSTM_based_model
3.7M    weka
2.2M    e4_stock_market_price_prediction
952K    e1_bengaluru_housing
636K    e2_sentiment_analysis_on_stock_market_data
608K    e6_peformance testing of Sentence Transformers for sentence encoding
548K    Apriori Algorithm for Association Analysis
212K    e7_using twitter api to fetch trending topics, tweets and user posting them
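
du -shx * skips dot-directories at the top level; a sketch of the same report one level deep that includes them, in human-readable units:

du -h --max-depth=1 . | sort -rh | head -10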

12. Check Hard Disk Usage

$ df
Filesystem     1K-blocks      Used Available Use% Mounted on
tmpfs            1211936      2140   1209796   1% /run
This is your hard disk:
/dev/sda2      959786032  42308764 868649060   5% /
tmpfs            6059676     23156   6036520   1% /dev/shm
tmpfs               5120         4      5116   1% /run/lock
/dev/sda1         523244      5364    517880   2% /boot/efi
tmpfs            1211932      4752   1207180   1% /run/user/1000
This is your external storage device:
/dev/sdb1      999743488 636278784 363464704  64% /media/ashish/6137-6435

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.2G  2.1M  1.2G   1% /run
/dev/sda2       916G   41G  829G   5% /
tmpfs           5.8G   23M  5.8G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
/dev/sda1       511M  5.3M  506M   2% /boot/efi
tmpfs           1.2G  4.7M  1.2G   1% /run/user/1000
/dev/sdb1       954G  607G  347G  64% /media/ashish/6137-6435
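
Two handy variations: -T adds the filesystem type, and a path argument restricts the report to the filesystem holding that path:

df -hT /
df -h /media/ashish/6137-6435
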
Tags: Technology,Linux

Thursday, September 22, 2022

Technology Listing Related to Ubuntu Software House (Sep 2022)

1. GNU Image Manipulation Program (GIMP)
2. LibreOffice Suite
3. LibreOffice Software Listing
4. Mozilla Firefox
5. PyCharm IDE (Professional Edition)
6. qBittorrent (an open-source BitTorrent client)
7. Tor Browser: for bypassing the network firewall of a private network and the restricted-browsing settings of your ISP (Internet Service Provider).
8. VLC Media Player
9. Jami: a video conferencing application for all platforms. What does Jami mean? Ans: The choice of the name Jami was inspired by the Swahili word 'jami', which means 'community' as a noun and 'together' as an adverb.
Tags: Technology,Linux