Saturday, July 22, 2023

Crystal Disk Info (version 9.x) not detecting your SSDs in a Windows (Server) To Go installation

No, your disk/controller is not necessarily malfunctioning or faulty!

It seems that Crystal Disk Info (https://crystalmark.info/) does not detect my NVMe SSD in a Windows Server To Go installation running from a USB enclosure (https://www.amazon.com/dp/B07MNFH1PX).


Crystal Disk Info (version 9.1.0 x64) does NOT detect my SSD in a Windows To Go installation.




Uninstalling the application and installing an 8.17.x version worked.

Et voilà! Crystal Disk Info (version 8.17.13 x64) DOES detect my SSD in a Windows To Go installation.




Versions that worked for me:

- CrystalDiskInfo8_16_4

- CrystalDiskInfo8_17_13


Crystal Disk Info (version 8.16.4 x64) detects my SSD in a Windows To Go installation. Note that this is a different NVMe SSD make/model.


It seems that version 9.x, which is fairly recent, is missing a feature or has a bug.



2023-07-22


Sunday, June 25, 2023

SOPHOS UTM9 Home Edition, Server 2016, HTML5 VPN and the "Error: Protocol Security Negotiation Failure" error on HTML5 connections over the User Portal

 





https://community.sophos.com/sophos-xg-firewall/f/discussions/86373/user-portal-rdp-connection-protocol-security-negotiation-failure

https://community.sophos.com/utm-firewall/f/vpn-site-to-site-and-remote-access/75897/solved-i-get-the-following-error-when-i-try-to-connect-to-my-server-over-html5-portal-error-protocol-security-negotiation-failure



The fix: change the connection's Protocol Security setting from RDP to TLS, as described here:

https://community.sophos.com/utm-firewall/f/vpn-site-to-site-and-remote-access/78192/error-protocol-security-negotiation-failure-error-html5-connection-over-userportal



Sunday, June 11, 2023

Launching Firefox from a snapd installation in Lubuntu to an X display server using PuTTY, X11 forwarding and Xming


The purpose of this article is to record and summarise the process of setting up X11 forwarding with PuTTY and a Lubuntu installation, so that a GUI application running on the remote Lubuntu system can be displayed on the machine running the PuTTY client. Most setups worked for native binaries such as xclock and xeyes (located in the /usr/bin directory), but for applications located in other directories such as /snap/bin, the following error is shown:

PuTTY X11 proxy: Unsupported authorisation protocol

Error: cannot open display: [PCNAME]:10.0

Copying the .Xauthority file to the root home directory did not work.
But launching the application with XAUTHORITY=$HOME/.Xauthority /path/to/binary-file worked.

e.g. XAUTHORITY=$HOME/.Xauthority /snap/bin/firefox
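
A possible way to make this permanent (an untested sketch, not part of the original workaround) is to export the variable in the shell profile of the account you log in with:

# append to ~/.profile so every SSH login picks up the correct cookie file (hypothetical)
echo 'export XAUTHORITY=$HOME/.Xauthority' >> ~/.profile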


Simple rough steps:


Install Lubuntu


Install OpenSSH

sudo apt install openssh-server -y

sudo systemctl enable ssh
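
To start the service immediately and confirm it is running (a hypothetical extra check, not part of the original steps):

sudo systemctl start ssh

# verify the daemon is active
sudo systemctl status ssh --no-pager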


Allow port 22 (SSH) through the firewall

sudo ufw allow ssh

sudo ufw enable
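
To verify the firewall rule took effect (again, a hypothetical check):

sudo ufw status verbose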


Download, install and launch Xming and fonts (fonts optional)

https://sourceforge.net/projects/xming/files/Xming/6.9.0.31/

https://sourceforge.net/projects/xming/files/Xming-fonts/7.7.0.10/




Configure the PuTTY session with X11 forwarding

(Before/after screenshots of the PuTTY X11 forwarding settings omitted.)



Webpage URL with the full details: https://appuals.com/putty-x11-proxy-error/


Configure sshd_config as follows

sudo vi /etc/ssh/sshd_config

Make the following changes. Uncomment as necessary and save.


X11Forwarding yes

X11DisplayOffset 10

X11UseLocalhost no

PrintMotd no

TCPKeepAlive yes


NB: DO NOT add the following entry to the sshd_config file; ForwardX11Trusted is an ssh client option (ssh_config), not an sshd option, so the SSH server will report an error and may not start.

ForwardX11Trusted yes    <-- DO NOT ADD IN FILE


Restart services as required

sudo systemctl restart ssh.service
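
With Xming running and a fresh PuTTY session opened, a quick way to verify forwarding is working (a hypothetical check; the display number is the offset configured above):

echo $DISPLAY
# should print something like [PCNAME]:10.0 (with X11UseLocalhost no as configured above)

xclock
# a clock window should appear on the Xming display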



Forum Post on X11 forwarding and launching /snap/bin/firefox 

https://www.reddit.com/r/linuxquestions/comments/wypz4v/x11_forwarding_question/


 Never mind, I figured it out.

22.04 installs firefox as a snap by default and that does not correctly handle xauth authentication. I installed a non-snap version and it now works as expected. It also works with the snap version if I run

XAUTHORITY=$HOME/.Xauthority /snap/bin/firefox





Additional troubleshooting steps to determine whether xauth is working

https://stackoverflow.com/questions/46277419/x11-proxy-unsupported-and-wrong
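
In short, the checks from that thread boil down to comparing your display against the stored magic cookies (a sketch; exact output varies):

echo $DISPLAY

# list the magic cookies known to the current user
xauth list

# if the cookie for your display is missing (e.g. when working as root),
# merge it in from the regular user's file (assumed path):
xauth merge /home/[USERNAME]/.Xauthority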



Launching another instance of X

https://unix.stackexchange.com/questions/85383/how-to-start-a-second-x-session

startx -- :1
NB: This must be done at a local console, not in a PuTTY/SSH session.

How to find the list of GUI applications in a Linux installation
https://askubuntu.com/questions/1091235/how-to-get-the-list-of-all-application-installed-which-has-gui
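
A rough way to do this from a terminal (a sketch based on the linked thread; paths are the usual defaults):

# every GUI application installed system-wide ships a .desktop launcher here
ls /usr/share/applications/*.desktop

# exclude entries that are hidden from the menus
grep -L 'NoDisplay=true' /usr/share/applications/*.desktop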

Naresh
20230611










Sunday, October 10, 2021

Efficiency is Brittle!

Efficiency is brittle!

This was messaged to me by a colleague in the UK on Monday 4th October 2021. It only occurred to me later that he was referring to the Facebook outage of that same day.

For me it was WhatsApp, and Downdetector was reporting tens of thousands of connectivity issues. It wasn't just WhatsApp; the entire Facebook network and its services were down.


How did this happen?


So after hours of lost business, dropped stock prices and net worth, the Facebook network was back on track.

So the billion-dollar question is how did this happen?

There was initially some speculation of a cyberattack, but a tweet from Cloudflare's John Graham-Cumming said that it was most likely a BGP update issue, and even Facebook sent a message about it.

Other articles also said it was a BGP misconfiguration that caused the outage. So how can such a simple protocol do so much damage?


The Border Gateway Protocol (BGP)

I am not going into too many details of BGP here, as this article explains it nicely, as does this YouTube video (please read up on your subnetting to get a better idea). Large corporations and universities do possess and control BGP routers on their premises. Of course, possession and control of such devices and their link to the ISP/internet have to be managed very carefully. There are normally numerous processes and checks before a BGP route is changed, and these go through what are called change management procedures.

Sometimes mistakes do get past the checks in the system and can cause disastrous results as they get replicated across the globe (not just within a country, but the entire world). It seems such mistakes happen more often than you would think.

As part of the lessons learnt, these mistakes are documented and the change management system is updated with these new checks.



Could they Have Solved it Faster?

Most of you would be asking: if it was a simple fix, why couldn't they have solved it sooner? The main problem is that the Facebook network, including its Software Defined Networks (SDN), uses BGP routing. Everything is integrated, including their access control. This means their network engineers could not remote into the facilities, and the people working at the facilities didn't have the clearance to override access (think of smart card/biometric systems that cannot connect to another network to identify the person).

So in most cases they would have to send their network engineering teams to the data centre sites with the manual override keys and devices to get access to the systems. They may literally have had to plug into the equipment using laptops and console cables to implement the fix.

All of this takes time. That is why you want to plan all your configuration changes before making any of them. Then you implement your changes one device at a time, testing each change with simple tools, as sketched below.
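
For example (hypothetical checks from a client on the affected network; the addresses are placeholders):

# basic reachability beyond your edge
ping -c 4 8.8.8.8

# confirm traffic takes the expected path/next hop
traceroute 8.8.8.8

# confirm name resolution still works through your resolver
dig example.com @192.168.1.1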


Efficiency is Brittle!

Back to the initial statement. Looking at the wider perspective: in the quest to make systems more efficient, we sacrifice redundancies, creating what can be called a "critical path" in the business process. Should any component in that critical path fail (the weak link), the entire process fails, resulting in lost business value.

So how do we harden our "critical path" business process from such failure?

There are multiple approaches that can be implemented, but here are the top three in ascending order of cost (time, money, resources):

1) Look at the likelihood of each part of your process failing, its impact, and how quickly it can be resolved. Develop independent processes that resolve this AND update these processes periodically.

This becomes part of your operational monitoring and control of the system.

2) Develop redundancies for each process. This involves a backup system or configuration.

3) Develop another redundant path through the process. This is the most costly, but it gives the safest form of redundancy and resilience.


Takeaway

So what's the takeaway from all this? Always remember that everything is a process, and there will always be a weak link that fails first. You can harden the process or add redundancy/contingency to cater for failure.

But after all is said and done, everyone makes mistakes; it's your process for fixing them and learning from them that is important.


"You may not live long enough to learn from everyone's mistakes, but at least you can learn from yours..."


Naresh

2021/10/10

Sunday, July 21, 2019

Quick Links: Software Application List

Application List and Shortcuts


The following is a list of applications that I use for systems administration, programming, and general use that may be useful to others. It is not an exhaustive list.

Disclaimer: This list is intended for use by Naresh Seegobin only!!! Use at your own risk!!!

No. | Application Category | Name | Used For | URL | Comments
1 | Utilities | WinDirStat | View directory structure | . | .
2 | Utilities | YUMI | Linux multiboot USB drives | . | .
3 | Utilities | FreeCommander | File navigation | . | .
4 | Utilities | 7-Zip | File compression and extraction | . | .
5 | Utilities | QuickPar/phpar2 | File parity/repair | . | .
6 | Optical Media Storage | ImgBurn | Burning to optical media | . | .
7 | Optical Media Storage | DVDisaster | Optical media protection | . | .
8 | Virtualisation | VirtualBox | Virtualisation | . | .
9 | PDF Viewer | Sumatra PDF | Viewer | . | .
10 | Terminal Utility | PuTTY | . | . | .
11 | FTP | FileZilla | FTP client | . | .
12 | Disk Utility | CrystalDiskInfo | Disk check | . | .



Saturday, June 8, 2019

Net Admin Adventures in 2019 - Raspberry Pi as a DHCP and DNS server


Raspberry Pi as a DHCP and DNS server

Introduction

This blog article covers migrating my DHCP server away from a wireless router, and why you would want to set up a lightweight DHCP server on Linux.

Current Issues with the Existing ASUS RT-N16 Router

I had been using an ASUS RT-N16 router with Tomato firmware for my wireless (multiple SSIDs), DHCP, DNS, quasi-VPN, and internet routing. The router was good for its time but is way past its end of life. Even though you can set up OpenVPN on it, the problem is that the NVRAM capacity is too small to handle multiple configurations, such as VPN certificates, static address assignments, etc.

Firmware flashing is one thing, but the NVRAM has a limited number of writes, and any configuration save, even of a single character, means a rewrite of the entire NVRAM block. As NVRAM is flash memory, each write reduces its life.

Point to Note 1) The fewer changes you make on your SOHO wireless router, the better: fewer writes to the NVRAM means your router will last longer.


Once the NVRAM fails, resetting or firmware updating will not help. Unless you can change the NVRAM chip, it is better to recycle your router than to keep it as a paperweight. Maybe re-use the power adapter, antenna, etc.


With these issues, I eventually migrated most services from this router. The VPN and firewall/internet router features were first migrated, then the wireless was replaced with an Ubiquiti AP and controller. DHCP was the last to migrate. Details of why it's the last will be explained later.

In addition to the limited NVRAM capacity, the age of the router shows in the time it takes to boot up and start issuing IP addresses. Integrated services routers are great, but to keep costs down they use many cheaper components, resulting in a shorter lifespan than enterprise ISRs.

Slow boot-up times mean slow issuing of IP addresses, which means devices cannot get access to the internet when they start up.

This matters most for power failures: once power is restored, all running devices power on automatically. Even with a UPS, if the battery is drained and shuts down, the same process happens as if there were no UPS. It is possible to add a delay when devices are powered on (a la breakermatic devices), but that's for another blog post.

Either way, devices boot up and start looking for an IP address. No IP from a DHCP server means APIPA-assigned addresses or none at all. Statically assigned devices are great, but it's difficult to do this for IoT.

Rule of thumb 1) For static IP addresses, ALWAYS have a reservation for these static IP devices, preferably with their correct MAC addresses. Why? Should a device's network configuration be reset, at least its IP address will stay the same. Do this for VMs too, where the host can change their MAC addresses if live migrations are done between hosts. Once migration is completed and the MAC address has changed, just update it in the reservation and keep a note of the previous MAC address.

Anyhow, back to getting a replacement wireless router. Even though it's tempting to get a replacement wireless router because it's newer, faster, etc., the problem is that you will want to use it for more than just a DHCP server. And the more services you run on it, the more critical it becomes to your day-to-day operations, which means you will not want to restart it for maintenance, or have it fail (hardware or software).

Separating the services is a good idea at home or in a SOHO, provided the server hosting each service is low cost, manageable and lightweight.

Low cost is for obvious reasons; you can't spend USD $200 on a simple DHCP server. You may then want to beef it up into a bigger server (RAM, storage), and fall into the trap of maximising the resources that you have.

Manageable, in that it doesn't take too much time to setup AND time to maintain it.

Lightweight in that it uses little resources in terms of CPU, Memory and disk space, maybe even low power consumption.


So the options are:
1) Windows DHCP server in a VM.
2) Windows Server in a VM with a third-party DHCP server <-- not sure why you would want to do this.
3) Windows DHCP server on a Micro PC (your USD $200 item)
4) Linux DHCP server (minimal or no GUI) in a VM.
5) Linux DHCP server (minimal or no GUI) on a MicroPC.
6) Linux DHCP server on a single-board ARM PC (e.g. Raspberry Pi).

Options 1) and 2) are easy to set up but resource heavy (yes, you can use Server Core or Nano), and the management overhead is too much for a home or SOHO. Then there are the licensing costs and the updates.

Option 3) suffers because you will want to maximise the use of your MicroPC for other things.

Options 4) and 5) are great. No licensing costs, and the configurations are simple and stable.

Running in a VM, there is the risk of the host failing, AND the VMs may take too long to boot up. Then there is the chicken-and-egg scenario of the host waiting for the DHCP VM to boot up to lease IPs.

Rule of Thumb 2) For DHCP, get the fastest-booting device that can host this service.

DHCP is not supposed to be heavy. DNS maybe is, but that's an added feature you can choose to run or not on the same device.

You may think a Cisco router can do the job, but tell me, how long does it take to boot up?

You say a router should always be on backup power so restarts are minimal? What about your DHCP server? It can also be on backup power.

You say a router uses very little power? How about the Raspberry Pi? Is 10 W of power too much?

So you see where this is going.

Option 6) is the best option so far: a low-powered device that boots up fast and can run Linux.
Pi it is.

Why a Linux Alternative?

So why a Linux alternative to the router?
If you need to include optional information in your DHCP leases, then unless the router runs a proper implementation of a Linux kernel/flavour, you may not be able to add more options and additional features.

A lightweight version of Linux will boot fast.
It is highly customisable.
I need it to supply the following options:

# option 3: default gateway (router)
dhcp-option=3,192.168.1.1
# option 5: name server
dhcp-option=5,192.168.1.1
# local domain name
domain=[DOMAIN-NAME]
Should you be implementing a Linux-based DHCP/DNS server in your enterprise network and need to advertise KMS servers (the _vlmcs SRV records), use the following entries in your config file:

srv-host=_vlmcs._tcp.[DOMAIN-NAME],[HOSTNAME1].[DOMAIN-NAME],1688,1
srv-host=_vlmcs._tcp.[DOMAIN-NAME],[HOSTNAME2].[DOMAIN-NAME],1688,2
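
To confirm the SRV records are actually being served (a hypothetical check, not part of the original post), query them from a client:

# query the KMS SRV record against the dnsmasq server (placeholders as above)
nslookup -type=srv _vlmcs._tcp.[DOMAIN-NAME] [DNSMASQ-SERVER-IP]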


More info on KMS and DNS can be found on Cornell University's website and Eric Ellis' blog.


Later on I intend to include next-server entries in my DHCP leases.

Btw, I tried using SOPHOS UTM for Home (https://www.sophos.com/en-us/products/free-tools/sophos-utm-home-edition.aspx) for DHCP, but it doesn't have the ability to add these options, and/or they are not properly documented. Plus, as a UTM, its boot-up time will be slow (depending on the hardware it runs on), so it's not good for a home with power failures and IoT. Actually, its boot-up time is still faster than my ageing ASUS RT-N16 router.


Which DHCP server service to use?
I find DNSMASQ (not DHCPD) to be the best Linux DHCP server.
All the configuration is done via SSH by modifying the corresponding config file. DNSMASQ is highly customisable, as the sketch below shows.
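
As a minimal sketch of what such a config file can look like (all values here are placeholders/assumptions, not my actual setup), including a reservation as per Rule of thumb 1:

# /etc/dnsmasq.conf - minimal DHCP example (hypothetical values)
# listen on the wired interface only
interface=eth0
# local domain handed out with leases
domain=[DOMAIN-NAME]
# dynamic pool and lease time
dhcp-range=192.168.1.100,192.168.1.200,12h
# option 3: default gateway
dhcp-option=3,192.168.1.1
# reservation: fixed IP tied to a MAC address
dhcp-host=AA:BB:CC:DD:EE:FF,[HOSTNAME],192.168.1.50

Restart the service (sudo systemctl restart dnsmasq) after editing for the changes to take effect.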

Even though DHCPD has a feature for load-balancing DHCP servers, it doesn't work too well. At that point I didn't plan to set up an active/passive failover implementation using DHCPD, so I never bothered to continue using it. DNSMASQ doesn't have load balancing (it would be a nice feature to have in the future), but with config files it can facilitate an active/passive implementation. That will be for another blog.

So a lightweight version of Linux running DNSMASQ on a Raspberry Pi is the best option for me.

Pi Problems

The problem in my initial attempts to use a Raspberry Pi with Raspbian was that the MAC address changed every time the Pi was restarted/power cycled. Not good if you want to set a reservation based on MAC address, and setting a static IP address was also a pain. After days of searching I found no definitive solution online, in a forum or even a blog post. Plus, I didn't want to spend time researching a solution (and then blogging about it :)

So if I cannot set a static IP address, the Pi is worthless as a DHCP server. Project shelved for over a year.

Until a good friend introduced me to DietPi, https://www.dietpi.com/

It is lighter than Raspbian, and you can actually configure a static IP address without hooking the Pi up to a TV or monitor.

For my adventures in the DietPi, check this blog article out, "Net Admin Adventures in 2019 - DietPi on Raspberry Pi as DHCP and DNS server".

Summary

Ageing wireless routers have a limited lifespan due to their low-cost components. The NVRAM is the most affected, with its limited number of writes.
When selecting a replacement device, a Raspberry Pi can adequately function as a DHCP server once you can get a Linux distribution running on it that allows easy configuration of a static IP address.

Step 1) Identify services to migrate.
Step 2) Identify your various hardware options, cost, configurability, maintenance and resource overhead.
Step 3) For each service, look at the various options that can be used. If it can be done on a low-powered device rather than a VM, that may be the better option.
Step 4) Select your hardware/software solution and implement.

Point to Note 1) The fewer changes you make on your SOHO wireless router, the better: fewer writes to the NVRAM means your router will last longer.
Rule of thumb 1) For static IP addresses, ALWAYS have a reservation for these static IP devices, preferably with their correct MAC addresses.
Rule of Thumb 2) For DHCP, get the fastest-booting device that can host this service.

Read my blog "Net Admin Adventures in 2019 - DietPi on Raspberry Pi as DHCP and DNS server" for the next steps on the Raspberry Pi.

Naresh
2019/06/08

Saturday, August 16, 2014

Oracle 12c on Oracle Linux 6.5 on Hyper-V (Server 2012R2 Updated) on Storage Spaces with De-duplication

 What more do you need.... :)


The following is for those who have encountered issues installing Oracle Linux 6.5 on Hyper-V with the aim of installing the Oracle 12c database on top of it.
Though Oracle 12c on Oracle Linux 6.5 is not supported on Hyper-V by Oracle, it may be necessary for demonstrations and proofs of concept.


Configuration environment:
  1. Server 2012 R2 OS with GUI, fully updated
  2. Storage Spaces enabled on
  3. 4 drives in mirror mode
  4. De-duplication enabled with a 0-day schedule time (always applying de-duplication algorithms to all files)
Hyper-V Environment:
  1. VM with 4 GB RAM
  2. 4-core CPU, set for maximum compatibility for live migration
  3. Boot disk: IDE, 320 GB
  4. Additional SCSI disk: 320 GB for the /u01 partition



Major symptoms encountered:
  1. Installation of Oracle Linux 6.5 went well (partition layout of /boot, 16 GB swap, rest as / partition with the default ext4), but on reboot it freezes while loading some modules.
  2. In some cases loading completes and the user is able to log in as root (now the real problem happens).
  3. The Oracle Linux 6.5 OS keeps using a lot of memory; giving it up to 12 GB RAM still results in it using the swap partition.
  4. Attempting to format the 320 GB SCSI disk (/dev/sdb) as MBR and ext4:
    1. Mounting works, but the mount cannot be browsed with the GNOME file manager, nor listed with ls in a terminal.
    2. The un-mount operation freezes; the drive cannot be umounted.
    3. Shutting down the OS waits at un-mounting file systems.
    4. A forced power-down is necessary.

Could it be a configuration issue?








Mitigation Step 1 Attempted:
  1. Using kickstart script from oracle dev days VM's
  2. Formatting and mounting works but cannot browse using Gnome explorer, nor ls via terminal.
  3. Un-mounting operation freezes, cannot umount drive.
  4. Shutting down OS will wait at un-mounting file systems.
  5. Force power down is necessary.
 Same Symptoms Encountered:
  1. mkdir /u01/* failed
  2. umount /u01 failed
  3. repeated attempts of re-installation failed
  4. Shutting down OS will wait at un-mounting file systems
  5.  Force power down is necessary


Could it be a Storage Spaces (and de-duplication) problem?




Mitigation Step 2 Attempted:
  1. Create a VM on a host without de-duplication or Storage Spaces.
  2. Oracle Linux used about 400 MB RAM, with no excessive usage of the swap partition.
  3. The VM was configured with 2100 MB RAM.
  4. The initial attempt worked, with formatting and mounting of /dev/sdb1 as /u01.
  5. The GNOME file manager worked; ls /u01 worked.
 Same Symptoms Encountered:
  1. mkdir /u01/* failed
  2. umount /u01 failed
  3. repeated attempts of re-installation failed
  4. Shutting down OS will wait at un-mounting file systems
  5. Force power down is necessary

Could it be a disk IDE/SCSI problem?

Mitigation Step 3 Attempted:
  1. Create VM with both drives as IDE0 and IDE1
 Same Symptoms Encountered:
  1. mkdir /u01/* failed
  2. umount /u01 failed
  3. repeated attempts of re-installation failed
  4. Shutting down OS will wait at un-mounting file systems
  5. Force power down is necessary 



Beyond an IDE/SCSI disk problem?

 Mitigation Step 4 Attempted:

  1. Create a VM with disk0 as IDE and disk1 as SCSI.
  2. Installed as normal but used ext3 partitions.

Final Result (What everyone is waiting for!!!):
  1. Installation of Oracle Linux 6.5 worked
  2. Formatting, mounting of /dev/sdb1 as /u01 worked
  3. mkdir /u01/temp worked
  4. unmounting worked
  5. reboot worked.
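
For reference, the pass/fail cycle used throughout the mitigation steps boils down to something like this (a sketch with assumed device names; /dev/sdb is the second virtual disk, and the commands run as root as in the installs above):

# partition the second disk (one primary MBR partition -> /dev/sdb1), then format as ext3
fdisk /dev/sdb
mkfs.ext3 /dev/sdb1

# mount it as /u01 and run the operations that failed under ext4
mkdir -p /u01
mount /dev/sdb1 /u01
mkdir /u01/temp       # failed under ext4, works under ext3
ls /u01
umount /u01           # froze under ext4, works under ext3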


At this point installation of Oracle 12c was successful....

 -------------------------------------------------------------------------------------------------------------------

Conclusion (IMHO)

The implementation of ext4 on top of Hyper-V may be an issue, causing disk processes to sit in a busy wait.

Using ext3 worked, presumably because it implements fewer resilience features than ext4.


Further tests:
xfs, btrfs etc...




Googling for a solution at this time (2014/08/10) did not produce any valid results, but if anyone has encountered something similar with a working solution, please share your experience by posting in the comments or PM me.