Home Lab Proxmox Upgrade and using Lenovo/AMD DASH

What is DASH?

This is what Lenovo says:

DASH (Desktop and mobile Architecture for System Hardware) is a set of specifications developed by the DMTF, which aims to provide open-standards-based web services management for desktop and mobile client systems. DASH is a comprehensive framework that provides a new generation of standards to protect the security of out-of-band and remote management of desktop and mobile systems in multi-vendor, distributed enterprise environments. DASH uses the same tools, syntax, semantics, and interfaces across the product line (traditional desktop systems, mobile and laptop computers, blade PCs, and thin clients).

https://download.lenovo.com/pccbbs/thinkcentre_pdf/thinkstation_p620_dash_configuration_guide_v1.3.pdf

Why are you using it?

I’ve upgraded my homelab Proxmox instance to a Lenovo ThinkCentre M75q Gen2 with an AMD Ryzen PRO.

https://www.lenovo.com/ca/en/p/desktops/thinkcentre/m-series-tiny/thinkcentre-m75q-gen-2/11jn002nus

  • Processor: AMD Ryzen™ 5 PRO 5650GE Processor (3.40 GHz up to 4.40 GHz)
  • Operating System: Windows 11 Pro 64
  • Graphic Card: Integrated AMD Radeon™ Graphics
  • Memory: 8 GB Non-ECC DDR4-3200MHz (SODIMM)
  • Storage: 256 GB SSD M.2 2280 PCIe TLC Opal
  • AC Adapter / Power Supply: 65W
  • Networking: Integrated Gigabit Ethernet
  • WiFi Wireless LAN Adapters: Intel® Wireless-AC 9260 2×2 AC & Bluetooth® 5.1 or above

It’s not a bad system, moving from a Dell OptiPlex 7060

  • Processor: Intel(R) Core(TM) i7-8700T CPU @ 2.40GHz
  • Memory: 32 GB Non-ECC DDR4-2666MHz

Upgrades to the ThinkCentre M75q Gen2

Since there’s only 8GB at purchase, I opted for a non-ECC 32GB kit. The ThinkCentre M75q Gen2 supports up to 64GB, but I felt it wasn’t necessary to go to 64GB yet. I also skipped ECC, since this was meant to be a low-cost upgrade.

I dropped in a Kingston PCIe 4.0 NVMe 1TB drive, plus a 1TB SATA SSD just because I had it lying around.

AMD Management Console Downloads

You will need to download the AMD Management Console

https://www.amd.com/en/technologies/manageability-tools

Setting up DASH

I was able to enable DASH support in the BIOS but didn’t find any further configuration options.

Archiving Facebook Messages and Facebook Marketplace Messages

Too many Facebook Messages

I had a ton of Facebook Marketplace messages that I was annoyed by and wanted archived. I found many resources online for using the Chrome console to run JavaScript that archives the messages.

Javascript Gist and More

I found a gist with the needed code, but it didn’t work. Upon reading the comments, I found updated code buried there, along with another GitHub repository.

Archive all of the messages in your Facebook Messages Inbox – archive-all-facebook-messages.js (gist.github.com)

My modified code

I took the original code and modified it with ChatGPT. I had it limit the number of times the script would run (essentially, how many messages it would archive), and I also added a delay to make sure I didn’t get blocked by Facebook.

let executionCount = 0;

function run() {
  if (executionCount >= 100) {
    return;
  }
  
  let all = document.querySelectorAll('div[aria-label="Menu"]');
  if (all.length == 0) return;
  
  let a = all[1];
  if (!a) return; // stop if there's no conversation menu button to click
  a.click();
  
  let menuitems = document.querySelectorAll('div[role=menuitem]');
  let archiveChatRegex = /Archive chat/;
  
  for (let i = 0; i < menuitems.length; i++) {
    if (archiveChatRegex.test(menuitems[i].innerText)) {
      menuitems[i].click();
    }
  }
  
  executionCount++;
  setTimeout(run, 200); // 200 ms delay between archive actions
}

run();

Importing Large .ics file into Gmail or Google Workspace Calendar

Importing a large Gmail or Google Workspace calendar will fail when the exported .ics file is larger than 1MB. This is due to the Gmail interface’s 1MB limit on processing .ics files.

The solution is to split up the .ics file, which you can do manually or with the following Python script.

GitHub – druths/icssplitter: A script to split up big ics files (github.com)
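If you’d rather roll your own, here’s a minimal sketch of the same idea in Python. It assumes a simple, flat VCALENDAR layout (header lines, then a series of VEVENT blocks) and keeps each output chunk safely under the 1MB limit; the filenames in the usage comment are hypothetical.

```python
MAX_BYTES = 900_000  # stay safely under Gmail's 1MB import limit

def split_ics(text, max_bytes=MAX_BYTES):
    """Split .ics text into chunks, each a valid standalone calendar."""
    lines = text.splitlines()

    # Header = everything before the first VEVENT; footer closes each chunk
    first = lines.index("BEGIN:VEVENT")
    header = "\n".join(lines[:first]) + "\n"
    footer = "END:VCALENDAR\n"

    # Collect each BEGIN:VEVENT .. END:VEVENT block as one string
    events, current = [], []
    for line in lines[first:]:
        if line == "BEGIN:VEVENT":
            current = [line]
        elif line == "END:VEVENT":
            current.append(line)
            events.append("\n".join(current) + "\n")
            current = []
        elif current:
            current.append(line)

    # Pack events into chunks without exceeding max_bytes per chunk
    chunks, body = [], ""
    for event in events:
        if body and len((header + body + event + footer).encode()) > max_bytes:
            chunks.append(header + body + footer)
            body = ""
        body += event
    if body:
        chunks.append(header + body + footer)
    return chunks

# Usage (hypothetical filenames): write each chunk to its own file and
# import them into Google Calendar one at a time.
# text = open("calendar.ics").read()
# for i, chunk in enumerate(split_ics(text), 1):
#     open(f"calendar-part{i}.ics", "w").write(chunk)
```

Each chunk is a complete calendar (header, events, END:VCALENDAR), so Gmail accepts them individually.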

Using Cloudinit and Netplan with IPs on a Different Network and Gateway

If you’ve ever had to utilize a hosting provider that offers the option to buy extra IPs or failover IP addresses, you may have observed instances where these IPs shared the same gateway as your original IPs, rather than being part of the additional IP network.

Here are some of the providers I’m aware of that require this.

  • OVH
  • SoYouStart

The problem arises when you use Cloudinit to deploy Ubuntu VMs, which use netplan: unfortunately, there isn’t a method to configure netplan through Cloudinit to use a gateway that isn’t on the same network as the IP address.

I’m using Proxmox, and although you can create a custom network configuration for netplan and deploy it as a snippet via Cloudinit, this isn’t ideal.
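For reference, the snippet workaround on Proxmox looks roughly like this. This is a sketch: the storage name `local`, the VM ID `100`, and the file name are assumptions for illustration, not values from this setup.

```shell
# Sketch: ship a hand-written network config to a VM as a cloud-init
# snippet on Proxmox. Storage "local", VM ID 100 and the file name
# are assumptions.

# 1. Copy your custom netplan-style network config to a storage that
#    has the "Snippets" content type enabled
cp my-network.yaml /var/lib/vz/snippets/my-network.yaml

# 2. Tell the VM's cloud-init drive to use it instead of the
#    auto-generated network config
qm set 100 --cicustom "network=local:snippets/my-network.yaml"

# 3. Rebuild the cloud-init image so the change takes effect
qm cloudinit update 100
```

On older Proxmox releases without `qm cloudinit update`, the cloud-init image is regenerated when the VM is next started.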

Canonical looks to have fixed the bug in January 2023: https://github.com/canonical/cloud-init/pull/1931

However, that most likely relates to the new Ubuntu LTS. I’ve tested this within Ubuntu 20.04, and the appropriate config is in place. Here’s the generated /etc/netplan/50-cloud-init.yaml:

root@srv01:~# cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            addresses:
            - 147.135.0.0/24
            match:
                macaddress: 02:00:00:79:e4:73
            nameservers:
                addresses:
                - 213.186.33.99
                search:
                - domain.com
            routes:
            -   on-link: true
                to: default
                via: 15.0.0.254
            set-name: eth0
        eth1:
            dhcp4: true
            match:
                macaddress: 8a:ca:d3:4d:c9:28
            set-name: eth1
BUG: No routing in VM with cloud init (ubuntu 18.x – 19.4) | Proxmox Support Forum (forum.proxmox.com)

https://linuxconfig.org/how-to-add-static-route-with-netplan-on-ubuntu-22-04-jammy-jellyfish-linux

Using Monit Environment Variables with exec

If you read the Monit documentation, it tells you exactly how to use Monit environment variables when using exec.

No environment variables are used by Monit. However, when Monit executes a start/stop/restart program or an exec action, it will set several environment variables which can be utilised by the executable to get information about the event, which triggered the action.

https://mmonit.com/monit/documentation/monit.html#ENVIRONMENT

I can be smart, but sometimes I can be daft. You don’t want to use the variables within your Monit configuration; instead, you want to use these variables in your exec script.

Here’s a great example of how to use $MONIT_EVENT. First, set up a Monit check:

check system $HOST-steal
    if cpu (steal) > 0.1% for 1 cycles
        then exec "script.sh"
        AND repeat every 10 cycles

Now here’s script.sh, which will use $MONIT_EVENT:

#!/bin/bash
echo "Monit Event: $MONIT_EVENT" | mail -s "$MONIT_EVENT" [email protected]

I was in a rush and felt I had to post this to help others who might overlook this.

Large Mail Folder and imapsync Error “NO Server Unavailable. 15”

I was having issues migrating the “Sent Items” folder of a hosted Exchange 2013 account to Microsoft 365. The hosted Exchange 2013 server was returning a “NO Server Unavailable. 15” error when trying to select the “Sent Items” folder, which held 33,000 messages.

Digging further, I couldn’t find anything until I stumbled upon this thread on the Microsoft forums.

https://social.technet.microsoft.com/Forums/azure/en-US/2508f50f-6b28-4961-8e6c-5425914d4caa/no-server-unavailable-15-on-exchange-2013?forum=exchangesvrclients&forum=exchangesvrclients

I’ve come across this issue twice with two different Exchange 2013 farms while setting up IMAP to use imapsync to migrate mail. The issue only happened when accessing one folder with lots of mail messages. A simple test is to use OpenSSL to verify the issue, like:

openssl s_client -quiet -crlf -connect mail.domain.com:993
A01 login domain/user password
A02 LIST "" *
A03 SELECT "problem folder"

IMAP will return: A03 NO Server Unavailable. 15

After changing lots of IMAP settings, the resolution was to enable IMAP protocol logging. It was previously disabled (the default), and the issue would happen. We disabled it again and the problem returned for the same mailbox. Re-enabled logging, et voilà, it works.

Set-ImapSettings -Server <server-name> -ProtocolLogEnabled $true

Hope this helps someone!

Cheapest Cold Storage Backup

Introduction

Someone posted in a Facebook group looking for the cheapest means of cold storage backups. I did some research and collected some data.

Response

Tapes

It all depends: if you have a petabyte or half a petabyte, tape might work, though it might be cheaper to just sync data to another data center. LTO-8/LTO-9 tapes hold 12/18TB uncompressed.

But you have to buy new tape drives every 5 years as the technology changes and data gets bigger. If you have a tape library with four drives, that can be costly.

You also have to rehydrate, sending tapes back and forth and having backup software manage them. You also need to replace tapes due to lifespan, or duplicate tapes for redundancy (they’re not 100% reliable).

For a small setup, you could look at the OWC Mercury Pro LTO, which offers onboard staging storage. I haven’t used it, but it looks cool: https://eshop.macsales.com/shop/owc-mercury-pro-lto

You’ll have a massive upfront cost for tape: the drive is the most expensive part, then an appropriate HBA, which is usually pretty cheap, and then the tapes, which are also relatively inexpensive.

But what about the storage of the tapes? Do you ship them to a friend? What software are you going to use for backups? There are lots of caveats with tape, even with LTFS: https://getprostorage.com/blog/lto-ltfs-archiving/

SSDs + Safety Deposit Box

You could get a safety deposit box, buy 4x 8TB M.2 SSDs, and plop them into this bad boy for 32TB raw, or less in a software RAID.

https://www.storagereview.com/…/qnap-tbs-464-mini-all…

You could buy two, or even a spinning-disk QNAP, and rehydrate every month.

The only issue is that M.2 SSDs are expensive; you’d want an 8TB SATA drive at around $900 a pop, and grab this little guy: https://www.storagereview.com/…/synology-diskstation…

Or you could just buy 2TB SSDs and use a docking station like this: https://www.amazon.com/…/ref=cm_sw_r_apan_glt_fabc…

Use SSDs like tapes. Just keep an eye on https://diskprices.com/ for the cheapest per-GB SSDs. The cheapest SSD out there is the SAMSUNG 870 QVO 4TB.

You could put the SSDs into an electrostatic bag with a dry pack and seal it 🙂

Online Storage

Backblaze B2 (Per TB)

At USD $5/month/TB, this is pretty affordable. If you have over 20TB, you can reach out for reserve capacity pricing, which requires a time commitment.

https://www.backblaze.com/…/reserve-capacity-storage.html

Backblaze Personal Backup (Unlimited)

At USD $5/month, if you can use Backblaze Personal Backup, you can back up an unlimited amount of data; the operating system just needs to be able to see the data. This doesn’t come with versioning.

Backblaze Largest Personal Backup (2018)

I saw a thread on Hacker News about the largest personal backup at Backblaze on the $5/month Personal plan. Granted, this is data from 2018.

https://news.ycombinator.com/item?id=20998010

Here’s a screenshot of the post, plus the original image in case the one from Imgur gets taken down.

S3 Glacier

Costs USD $3/month/TB, and is actually cold storage.

Mega.nz

Costs EUR €1.56/month/TB, not cold storage.

Wasabi

Costs USD $5.99/month/TB, not cold storage.

OVH Cloud Storage

Costs USD $9.50/month/TB, not cold storage.
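For a quick comparison of the per-TB options above, a few lines of Python can rank them by yearly cost. The EUR-to-USD rate used for Mega.nz is an assumption, as is the 10TB example capacity.

```python
# Compare the per-TB monthly prices listed above for a given capacity.
# The EUR->USD rate for Mega.nz is a rough assumption.

EUR_TO_USD = 1.08  # assumed exchange rate

PRICES_PER_TB_MONTH = {          # USD per TB per month
    "S3 Glacier": 3.00,
    "Backblaze B2": 5.00,
    "Wasabi": 5.99,
    "OVH Cloud Storage": 9.50,
    "Mega.nz": 1.56 * EUR_TO_USD,
}

def monthly_cost(provider, terabytes):
    """Monthly cost in USD for the given capacity."""
    return PRICES_PER_TB_MONTH[provider] * terabytes

# Rank providers by yearly cost for a hypothetical 10TB backup
for name in sorted(PRICES_PER_TB_MONTH, key=PRICES_PER_TB_MONTH.get):
    print(f"{name}: ${monthly_cost(name, 10) * 12:,.2f}/year")
```

At these rates, Mega.nz comes out cheapest per TB, with Glacier the cheapest of the actual cold storage options.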

Conclusion

There really isn’t much of a conclusion. The cheapest unlimited solution is Backblaze Personal Backup, but it can’t back up NAS devices. Per terabyte, Mega.nz seems to be the cheapest.

Getting Local Time based on Timezone in Airtable

If you’re using Airtable as a CRM and working with clients in different timezones, you might want to know their local time before actioning something, perhaps while they’re awake or asleep 🙂

In your Airtable base, create a column called “Timezone” where you’ll put a timezone supported by the SET_TIMEZONE function. You can see a list of these timezones at the following link.

https://support.airtable.com/docs/supported-timezones-for-set-timezone

You will then create a new “formula” column and use the following formula.

IF( {Timezone} = BLANK() , "" , DATETIME_FORMAT(SET_TIMEZONE(NOW(), {Timezone} ), 'M/D/Y h:mm A'))

The formula checks whether the Timezone field is blank; if it isn’t, it takes the current time from NOW(), shifts it to the timezone in the {Timezone} column with SET_TIMEZONE, and formats it with DATETIME_FORMAT.

You should then see the result in Airtable.
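For comparison, the same lookup can be sketched outside Airtable with Python’s standard zoneinfo module. The timezone names are the same IANA identifiers Airtable supports; note the output here is zero-padded, unlike Airtable’s 'M/D/Y h:mm A' pattern.

```python
# Sketch: compute a client's local time from an IANA timezone name,
# mirroring the Airtable IF/SET_TIMEZONE/DATETIME_FORMAT formula above.
from datetime import datetime
from zoneinfo import ZoneInfo

def local_time(tz_name, now=None):
    """Return the time in tz_name, formatted similarly to 'M/D/Y h:mm A'."""
    if not tz_name:  # mirrors the IF({Timezone} = BLANK(), "", ...) guard
        return ""
    now = now or datetime.now(ZoneInfo("UTC"))
    return now.astimezone(ZoneInfo(tz_name)).strftime("%m/%d/%Y %I:%M %p")

print(local_time("America/Toronto"))
print(local_time("Asia/Tokyo"))
```

Passing an explicit `now` makes the function easy to test with a fixed timestamp.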

Synology Redirect Nginx HTTP to HTTPS + Allow Let's Encrypt

You can follow this article pretty much all the way.

https://techjogging.com/redirect-www-to-https-in-synology-nas-nginx.html

However, it will fail if you use Let's Encrypt to generate an SSL certificate, so you simply need to add the following location block above the redirect line. Here’s how it should look.

server {
    listen 80 default_server{{#reuseport}} reuseport{{/reuseport}};
    listen [::]:80 default_server{{#reuseport}} reuseport{{/reuseport}};

    gzip on;

    location /.well-known/acme-challenge/ {
        # put your configuration here, if needed
    }

    server_name _;
    return 301 https://$host$request_uri;
}

Of course, after you make this change you will need to restart Nginx.

synoservicecfg --restart nginx

You can add as many locations as you like; once a location is matched, the request will not continue to the redirect at the end of the server {} block.

This was highlighted in the following Server Fault post.

https://serverfault.com/questions/874090/nginx-redirect-everything-except-letsencrypt-to-https

WHMCS Lightbox Loading Image in Footer (Cloudflare Issue)

You might have seen a loading image in the footer of your WHMCS admin page. If you inspect the page, you’ll see it has some tags for the lightbox.

The issue is related to Cloudflare Rocket Loader: you can simply create a page rule to disable Rocket Loader on the admin pages, or disable Rocket Loader altogether.

Source: https://whmcs.community/topic/309599-loading-spinner-admin-area/

Disable Rocket Loader with Cloudflare Page Rule

If you wish to disable Rocket Loader for a specific URL, you can use Cloudflare Page Rules with the following configuration.