ssh-import-id-gh – Import your public SSH key from GitHub

WordPress developers often use wp-cli to troubleshoot sites hosted on remote servers. While I don’t recommend working on a production site directly, these days most managed WP hosting companies offer staging sites with SSH access and wp-cli pre-installed. To log into those, most of us use password authentication, which can lead to a number of security issues. For example, finding the username is half the work done, and usernames are easy to find these days.

Importing a public SSH key is one of the recommended ways to get access to a remote environment these days. I have seen a number of issues while sharing my own public SSH key with clients and their DevOps teams, who ran into trouble importing the key onto their servers, such as while copying and pasting it. This eats up precious time when sites or servers are down.

I use a simple, quick and effective method that has never failed me. A command named ssh-import-id has existed on virtually every Linux server for years (since 2014, as far as I know). It goes with the format…

ssh-import-id launchpad_account_name

launchpad_account_name refers to your Launchpad username. My Launchpad username is pothi, and the SSH keys attached to my Launchpad account can be viewed on my profile there. So, running ssh-import-id pothi will import my public SSH key into any Linux machine.

With the popularity of GitHub, `ssh-import-id` started supporting SSH keys stored in GitHub. My GitHub username is pothi, and my public SSH key can be found on my profile and also via the API URL. As you can see, I use an ed25519 key with GitHub and an RSA key with Launchpad, so that I can import one if the other is not supported. To import my GitHub-associated public SSH key, one of the following commands can be used…

ssh-import-id-gh pothi
# or
ssh-import-id gh:pothi
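Under the hood, ssh-import-id-gh simply fetches the user’s public keys from the GitHub API and appends any missing ones to authorized_keys. Here’s a rough, offline sketch of that mechanism; the key is a placeholder (not my real key), and it writes to a demo directory instead of ~/.ssh so it is safe to run…

```shell
# Sketch of what ssh-import-id-gh does: fetch the user's keys and
# append any missing ones to authorized_keys. The real tool reads
# https://api.github.com/users/<user>/keys; here we use a placeholder
# key and a demo directory instead of ~/.ssh.
KEY='ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPLACEHOLDER pothi@github'
SSH_DIR="${TMPDIR:-/tmp}/ssh-import-demo"
AUTH_KEYS="$SSH_DIR/authorized_keys"

mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
touch "$AUTH_KEYS" && chmod 600 "$AUTH_KEYS"

# Append only if the key isn't already there -- ssh-import-id is
# idempotent in the same way, so re-running it is harmless
if grep -qF "$KEY" "$AUTH_KEYS"; then
    echo "Key already present; nothing to do."
else
    printf '%s\n' "$KEY" >> "$AUTH_KEYS"
    echo "Key imported."
fi
```

Running it a second time prints “Key already present; nothing to do.”, which is exactly why re-running ssh-import-id-gh never duplicates keys.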

For more details on ssh-import-id, please check out the corresponding man page (man ssh-import-id).

Do you use ssh-import-id or ssh-import-id-gh? If not, what else do you use to get access to remote servers?

Bootstrapping DigitalOcean Servers

I manage multiple DigitalOcean servers. There are a number of configurations to be done in order to bring up a secure DO droplet: firewall, alerts, etc. I missed a step years ago, and it caused a low-priority security alert recently. So, I automated most of the steps involved in configuring any DigitalOcean server (new or old).

Like most other things, I open-sourced the project on GitHub. Please check it out.

The first step is to get an API key and install the doctl command-line tool. Then you can initialize doctl with the API key as follows…

# you will be asked for the access token
# <NAME> could be "team-name"
doctl auth init --context <NAME>

# Authentication contexts let you switch between multiple authenticated accounts.
doctl auth list
doctl auth switch --context <NAME>

# validate doctl
doctl account get


The firewall is obviously the most important part of any server configuration. With DigitalOcean, you don’t have to rely on a server-level firewall such as ufw. With a server-level firewall, it is possible to lock yourself out of the server. With DigitalOcean (and most other cloud providers), the firewall can be configured and updated at the host level, a layer above the server and the operating system installed on it. Here I use a simple firewall named “Basics” that covers the basic stuff…

# create a firewall using minimal outbound rules
Firewall_Name="Basics"
Outbound_Rules="protocol:icmp,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:0,address:0.0.0.0/0,address:::/0 protocol:udp,ports:0,address:0.0.0.0/0,address:::/0"
doctl compute firewall create --name $Firewall_Name --outbound-rules "$Outbound_Rules"

# Get Firewall_ID using Firewall_Name
Firewall_ID=$(doctl compute firewall ls --format ID,Name --no-header | grep "$Firewall_Name" | awk '{print $1}')

# Add tags, standard inbound rules and any custom inbound rules
doctl compute firewall add-tags $Firewall_ID --tag-names live,prod


# Standard inbound rules; the ports here are the usual defaults -- adjust to your setup
Inbound_ICMP="protocol:icmp,address:0.0.0.0/0,address:::/0"
Inbound_HTTP="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0"
Inbound_HTTPS="protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0"
Inbound_SSH="protocol:tcp,ports:22,address:0.0.0.0/0,address:::/0"

doctl compute firewall add-rules $Firewall_ID --inbound-rules $Inbound_ICMP
doctl compute firewall add-rules $Firewall_ID --inbound-rules $Inbound_HTTP
doctl compute firewall add-rules $Firewall_ID --inbound-rules $Inbound_HTTPS
doctl compute firewall add-rules $Firewall_ID --inbound-rules $Inbound_SSH

# delete a firewall rule
doctl compute firewall remove-rules $Firewall_ID --inbound-rules=$Inbound_SSH

Internal Firewall

If you’d like to allow traffic between servers, you may use the following…

# Internal Firewall: allow traffic between servers on the standard
# private (RFC 1918) ranges -- adjust these to your actual VPC ranges
Firewall_Name="Internal"
Internal_10="protocol:tcp,ports:0,address:10.0.0.0/8"
Internal_10_udp="protocol:udp,ports:0,address:10.0.0.0/8"
Internal_172="protocol:tcp,ports:0,address:172.16.0.0/12"
Internal_172_udp="protocol:udp,ports:0,address:172.16.0.0/12"
Internal_192="protocol:tcp,ports:0,address:192.168.0.0/16"
Internal_192_udp="protocol:udp,ports:0,address:192.168.0.0/16"

doctl compute firewall create --name $Firewall_Name --inbound-rules "$Internal_10 $Internal_10_udp $Internal_172 $Internal_172_udp $Internal_192 $Internal_192_udp"

Once the firewalls are created, you may attach them to existing droplets, or to new droplets while creating them, using tags. DigitalOcean has a powerful tagging system that works nicely.
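For example, since the “Basics” firewall above was tagged live and prod, any droplet carrying one of those tags picks it up automatically. The droplet name, region, image and size below are made-up placeholders, and the commands are echoed rather than executed, so nothing here touches a real account…

```shell
# Dry-run sketch of tag-based firewall attachment.
# All names and slugs below are placeholders -- substitute your own.
Droplet_Name="web-01"
Region="blr1"
Image="ubuntu-24-04-x64"
Size="s-1vcpu-1gb"

# A droplet created with a matching tag inherits the tagged firewall
CREATE_CMD="doctl compute droplet create $Droplet_Name --region $Region --image $Image --size $Size --tag-names prod"
echo "$CREATE_CMD"

# Or tag an existing droplet later; the firewall follows the tag
TAG_CMD="doctl compute droplet tag $Droplet_Name --tag-name prod"
echo "$TAG_CMD"
```

Drop the echo (and the quotes around the command) to run these for real.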


Monitoring and Alerts

A big part of any server setup is monitoring, and alerting ourselves upon any unusual activity. While monitoring is a complex topic, DigitalOcean allows us to monitor resources such as disk space, memory usage, CPU spikes, etc. All of these can be configured in a flash using the following commands, which apply to all droplets (existing and new)…

ADMIN_EMAIL=$(doctl account get --format "Email" --no-header)

doctl monitoring alert create --compare "GreaterThan" --value "90" --emails $ADMIN_EMAIL --type "v1/insights/droplet/cpu" --description "CPU is running high"
doctl monitoring alert create --compare "GreaterThan" --value "75" --emails $ADMIN_EMAIL --type "v1/insights/droplet/disk_utilization_percent" --description "Disk Usage is high"
doctl monitoring alert create --compare "GreaterThan" --value "90" --emails $ADMIN_EMAIL --type "v1/insights/droplet/memory_utilization_percent" --description "Memory Usage is high"

What’s still not implemented?

While the existing API is mature enough, it doesn’t cover everything that can be done on the DigitalOcean dashboard. For example, a few months ago, DigitalOcean introduced uptime monitoring, which cannot be configured via doctl yet!

Nevertheless, the DO API is stable, and I highly recommend it if you plan to use DigitalOcean to host your sites or your clients’ sites.

Happy Hosting!

What I did in 2022 and my wishlist for 2023

It’s the year of MikroTik and network engineering. So, there is hardly any WordPress stuff in this post.


I bought the MikroTik hAP ax2 this year, just after its launch. I’d been waiting for it for years. This is my fourth MikroTik product for my R&D. Earlier, I owned two hAP ac2 units and an SXT LTE Kit. One hAP ac2 runs RouterOS v6; the other is on the latest version. I use them to test drive both versions in my Tik lab. Both hAP ac2 units are older models with 256MB of memory; these days, the hAP ac2 ships with just 128MB. I’ve never seen such a step backward in other hardware such as mobiles.

Open Source Projects

I released three open-source repositories this year…

  1. MikroTik LTE Scripts – LTE-specific scripts. All testing was done on my lone SXT LTE Kit. I’m planning to get another LTE device in 2024, specifically one with large flash storage; currently, most offer only 16MB! I don’t think MikroTik will release anything with more than 16MB in 2023. They are slow to upgrade the hardware.
  2. MikroTik Generic Scripts – Generic scripts that can be used on any MikroTik device: scripts to check for updates, take backups, alert upon anomalies, etc.
  3. Backup to S3 – A backup solution for non-WP sites. Currently, it supports Laravel and phpBB.

What about WordPress?

No significant progress! I could only maintain my active plugins. I plan to create a new plugin to replace an abandoned one related to the Mobile Detect library. Also, there are over 45 draft posts in the back-end. While some of them may never see the light of day, I wish to publish at least one per month in 2023 (and possibly in the following year too).

Consistency in Learning

If you look at my open-source contributions on GitHub for the year 2022, you can notice that I wasn’t consistent throughout the year, except for the last two months. In 2023, I wish to be consistent in my learning and sharing. Except on weekends and on days of travel, I plan to learn something new and improve something in my open-source projects.


A healthy body and mind help me improve my skillset too. I didn’t have a healthy lifestyle for decades, even though I never stopped learning at any point; I just wasn’t consistent. Only in the last couple of years did I come across TRE (time-restricted eating) and its real benefits. While I have been on and off with TRE during the past two years, I wish to be consistent with my TRE schedule. Currently, I skip breakfast and eat anything under the sun at other times; I mostly eat between 2pm and 7pm.

Infrastructure

Back in 2011, I wrote a colophon post. Not much has changed in it in terms of the underlying technologies used, such as the Nginx web server. However, a few things weren’t mentioned there that deserve a mention here. Basically, I run most of the services for this domain using Google. Even though I’ve been trying to de-Google myself for years, I still use Google services with this domain, mainly to collaborate with those who use Google services too. Here’s the list of (Google) services that I use…

Server for site hosting

The server is hosted on Google Cloud (Compute Engine). I’ve been running it under the free tier for years. I still use some paid services, for the sake of remembering to use Compute Engine and other Google services. The free tier is pretty limited. However, for the kind of traffic this site gets, the free limits are more than enough. :)

Email Hosting

As you may have guessed, I use Google Workspace, and have since its inception. However, most of my communication has moved to Proton Mail, mainly to improve privacy. Please note that most features that are free with Google are paid (or severely limited) in Proton Mail. To send mass emails, I use Amazon SES, though.

Domain Registrar

My domain registrar is Google Domains. It has changed hands multiple times. It works great most of the time, and offers automated provisioning of SSL / HTTPS for any sub-domain (or the root domain). It also offers redirects. I still use Redirect.pizza for redirects and for automated SSL, though; Redirect.pizza offers analytics that Google Domains does not. I use it only to redirect the root domain.

SSL Certificate Authority

Google Trust Services has provided SSL for this domain since 2023. Earlier, I used LetsEncrypt and Buypass for SSL certificates. Since Google’s root certificates have wider compatibility than the rest, I switched to Google’s free SSL.


Backups

Again, I use Google Storage, which offers up to 5GB of free storage. This is the only service that I use beyond the free limit, as my storage requirements are much higher. I use one-way backups, which helps improve security.
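One way to implement a one-way backup is to give the server write-only access to the storage bucket: it can add new backup objects but never read or delete existing ones, so a compromised server can’t wipe the backups. A minimal sketch with gsutil follows; the bucket name and source path are hypothetical, and the command is echoed rather than executed…

```shell
# Dry-run sketch of a push-only ("one-way") backup to Google Cloud Storage.
# "my-backup-bucket" and the source path are placeholders. In a real setup,
# the server's service account would hold only the storage.objectCreator
# role, so it can create objects but never list, read or delete them.
Bucket="gs://my-backup-bucket"
Source="/var/backups/site"

# One dated folder per run; -m parallelizes the upload
BACKUP_CMD="gsutil -m cp -r $Source $Bucket/$(date +%F)/"
echo "$BACKUP_CMD"
```

Remove the echo (and the surrounding quotes) to run the actual upload from a cron job.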

Version Control

I use Google Source Repositories to keep most of my private repos. I don’t want to keep everything in a single basket (GitHub), so I use Google’s offering only as an alternative. Google doesn’t offer public repositories, so it’s just for private repos.

Future course of action

As mentioned earlier, I plan to de-Google myself to improve privacy. If any of the above changes in the future, or if I start using additional services, I will update this post accordingly.

But why do I use only free resources?!

You may be wondering why I use only free resources (Google, Amazon SES, etc.). Actually, I do pay them. However, it is true that I mostly use free resources on the internet, for a specific reason. But that’s for another post. Stay tuned!

Rate limiting xmlrpc requests on WordPress using Nginx

WordPress-based sites are a target for most automated bots. Those bots look for various vulnerabilities in WordPress core, themes, and plugins. Then there are some kids (and their kid bots) that target specific resources on a WP site. xmlrpc.php is one such resource. It implements the XML-RPC protocol, which does many things in WordPress. For example, it lets remote sites notify you of their mentions, and lets you publish (and edit) articles using an app (such as the WordPress app for Android / iOS). It is also used by many plugins, such as Jetpack.

Naturally, xmlrpc.php may be called multiple times a day, multiple times an hour on a busy site, or even multiple times every minute on a high-traffic site. I have seen xmlrpc.php being accessed frequently even on sites with no traffic at all. That traffic is most likely from scan / spam bots looking for vulnerabilities.

Since there is no way to cache requests to xmlrpc.php, PHP and MySQL usage tends to climb quickly, as every request needs a bit of PHP and MySQL. As a result, CPU usage spikes, wasting precious CPU minutes. On platforms like AWS EC2, GCP, or Microsoft Azure, where every CPU hour is charged, the cost of running a site can increase substantially.

One solution to reduce the CPU usage is to completely block access to xmlrpc.php. However, since this file serves genuine purposes too, disabling access to it is not recommended. Alternatively, we can rate-limit requests to this file. A genuine client would not call it multiple times per second, so a decent limit is 1 request per second.

Let’s see how to implement rate limiting for xmlrpc in Nginx…

limit_req_zone $binary_remote_addr zone=wp_xmlrpc_limit:10m rate=1r/s;

server {

    location = /xmlrpc.php {
        limit_req zone=wp_xmlrpc_limit;

        fastcgi_split_path_info ^(.+\.php)(/.+)$;

        if (!-f $document_root$fastcgi_script_name) { return 404; }

        # Mitigate vulnerabilities
        fastcgi_param HTTP_PROXY "";

        include                     fastcgi_params;
        fastcgi_index               index.php;
        fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;
        fastcgi_pass                fpm;
    }

    # other location blocks such as location / {}
}
Basically, we define a separate location block to process xmlrpc.php and then insert two lines to introduce the rate limit. The first line (starting with limit_req_zone) should be placed outside the server block. The other (starting with limit_req) goes inside the newly introduced location block.

In the above code, we limited requests to 1 per second. We can fine-tune this depending on our use case. There are other areas to fine-tune too, such as a separate log for xmlrpc requests. That’s for another day!
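For the curious, here’s a quick preview of such fine-tuning: brief legitimate bursts (Jetpack syncs, app publishing) can be absorbed with a small queue, and xmlrpc hits can get their own log. The numbers and log path below are suggestions, not a prescription…

```nginx
    location = /xmlrpc.php {
        # allow brief bursts of up to 5 requests; serve them immediately
        # (nodelay) while still counting them against the 1r/s budget
        limit_req zone=wp_xmlrpc_limit burst=5 nodelay;

        # return 429 (Too Many Requests) instead of the default 503
        limit_req_status 429;

        # keep a separate eye on who hits xmlrpc.php
        access_log /var/log/nginx/xmlrpc.log;

        # ... the fastcgi directives from the example above ...
    }
```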

Happy Hosting!

Disable PHP warnings when running wp-cli

It is not uncommon to test sites in a development environment (locally, or in a staging environment where others can see the work in progress). In a development environment, we usually configure WP_DEBUG to be true. Here’s a sample wp-config.php for a development / test / staging environment…


define('WP_CACHE', false);

define('DB_NAME', 'actual_db');
define('DB_USER', 'db_user');
define('DB_PASSWORD', 'Super_Secret_Passw0rd');
define('DB_HOST', 'localhost');

define('WP_DEBUG', true);

// Other directives such as salts...

While the above code is perfectly okay, if the site generates PHP warnings, it is a nuisance to see them repeatedly when using wp-cli. Even if you try setting error_reporting to various values and turn off everything under the hood, you may still see PHP warnings with the above code, because warnings are displayed whenever WP_DEBUG is set to true. To disable WP_DEBUG only for wp-cli operations, the following modified code can be used…


define('WP_CACHE', false);

define('DB_NAME', 'actual_db');
define('DB_USER', 'db_user');
define('DB_PASSWORD', 'Super_Secret_Passw0rd');
define('DB_HOST', 'localhost');

if ( ! isset( $_SERVER['HTTP_HOST'] ) ) define( 'WP_DEBUG', false );
if ( ! defined( 'WP_DEBUG' ) ) define( 'WP_DEBUG', true );

// Other directives such as salts...

Basically, we added the following two lines to the original code. The first line checks whether the request comes from a browser or from the command line: when we invoke wp from the command line, there is no HTTP_HOST. This way, we can tweak WP_DEBUG depending on the presence (or absence) of HTTP_HOST.

if ( ! isset( $_SERVER['HTTP_HOST'] ) ) define( 'WP_DEBUG', false );
if ( ! defined( 'WP_DEBUG' ) ) define( 'WP_DEBUG', true );

The above code eliminates most PHP warnings when running WP-CLI. I hope this helps someone. There are multiple steps involved in getting a perfect development, local, test or staging environment for a WordPress site. If you are looking for a perfect hosting environment, or a customized server for a better development workflow, please get in touch.
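On a related note, WP-CLI itself defines a WP_CLI constant before wp-config.php is loaded, so an equivalent, arguably more explicit, check is possible. This variant is a sketch of my own, not from the original snippet…

```php
// Alternative: detect WP-CLI directly via the WP_CLI constant,
// which WP-CLI defines before wp-config.php is loaded.
if ( defined( 'WP_CLI' ) && WP_CLI ) {
    define( 'WP_DEBUG', false );
} else {
    define( 'WP_DEBUG', true );
}
```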

Nginx compatibility for “Cookies for Comments” plugin

Whether you are aware of it or not, spammers are more interested in your site than anyone else is. You’ll understand this more vividly when your blog starts to become famous and brings more and more visitors week after week, month after month, year after year.

The most annoying thing about comment spam is the amount of time you waste dealing with it. There are some bright minds in WordPress who help you save that time. One such person is Donncha, who put together a nice plugin named Cookies for Comments that blocks spam at the server level, in such a way that it doesn’t even reach WordPress, because all the work is done by the server itself. Here, we show example code for the Apache and Nginx web servers; the idea can be ported to any web server in general.

Integrating with Apache is straightforward. The code to configure Apache is displayed at the bottom of the plugin’s settings page. It’d look like this…

# If you're feeling adventurous, you can add the following two lines before
# the regular WordPress mod_rewrite rules in your .htaccess file.
# They will stop comments from spambots before they reach
# the database or are executed in PHP:

RewriteCond %{HTTP_COOKIE} !^.*abcdefghijklmnopqrstuvwxyz0123456789.*$
RewriteRule ^wp-comments-post.php - [F,L]

In the above code, the value of abcdefghijklmnopqrstuvwxyz0123456789 may change for each site. It is also part of the name of the cookie set by this plugin.

In Nginx, the code is a little different. Here’s the actual code…

# support for cookies for comments plugin!
location = /wp-comments-post.php {
    if ($http_cookie !~* "abcdefghijklmnopqrstuvwxyz0123456789") { return 403; }
    # rest of the code to process PHP.
}

Considering that Akismet cannot be used free of charge on a commercial site, this solution works great. With Akismet, there is a lot going on behind the scenes. With the Cookies for Comments plugin, a cookie is set for every visitor, and it is checked when that visitor posts a comment. Since the plugin sets a cookie for all visitors, you may need to cover it in your GDPR consent. At the least, you could inform visitors about the cookie before they comment, as is done on this site’s comment box…

comment form with cookie warning
Sample comment form showing a warning of cookies being used!

By adding just two lines of code, we can save a lot of trouble and frustration in the long run. If you have any other method to tackle spam, please share it in the comments!
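If you want to verify the Nginx rule after deploying it, a POST to wp-comments-post.php without the plugin’s cookie should come back with a 403. The domain below is a placeholder, and the command is echoed rather than executed so it can be reviewed first…

```shell
# Dry-run sketch of verifying the rule. example.com is a placeholder.
# Without the plugin's cookie, the request should return 403;
# with a browser-set cookie, a normal comment goes through.
CHECK_CMD='curl -s -o /dev/null -w "%{http_code}" -d "comment=test" https://example.com/wp-comments-post.php'
echo "$CHECK_CMD"
```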

Buypass CA – SSL with 180 Days Validity

Buypass is a Certificate Authority (CA) based in Europe. It offers free SSL certificates with a validity of 180 days. Unlike LetsEncrypt, Buypass also offers paid SSL certificates. So it is neither a competitor to LetsEncrypt nor a nonprofit; it is a for-profit company that also offers free SSL certificates. There are other CAs that offer free SSL certificates too. However, Buypass offers an ACME API that is compatible with LetsEncrypt tooling. For example, certbot can be used to authenticate the domain and obtain the free SSL certificates.

Starting Afresh

Certbot is the recommended tool / client-side software. However, the procedure for test certificates and live certificates is slightly different from what you may be used to with LetsEncrypt.

Here’s the procedure to get started with Buypass CA using certbot…

sudo certbot register --server 'https://api.test4.buypass.no/acme/directory'

The above command would do the following…

  • ask for your email address
  • an option to agree (or disagree) with the terms of service
  • an option to share your email address with the EFF

If you would like to shorten this long process, you may use the following one-liner, replacing ‘YOUR_EMAIL’ with your actual email address…

sudo certbot register -m 'YOUR_EMAIL' --no-eff-email --agree-tos --server 'https://api.test4.buypass.no/acme/directory'

Once the email is registered, we are free to test drive the domain authentication and fetch the test SSL certificates by running the following command…

# replace example.com and the webroot path with your own domain and path
sudo certbot certonly --webroot -w /var/www/example.com -d example.com -d www.example.com --server 'https://api.test4.buypass.no/acme/directory'

Please know that the test certificates cannot be used on live sites.

The above command issues real certificates for testing purposes, even though these test certificates can’t be used on live domains. Since the test certificates are real, we have to remove them before fetching live SSL certificates for live domains. We can remove the test SSL certificates using the following command, selecting the correct options when prompted…

sudo certbot delete

Output of the above command would look something similar to the following…

Saving debug log to /var/log/letsencrypt/letsencrypt.log

Which certificate(s) would you like to delete?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):

Please be careful to select the correct certificates to delete. If you hit the “Enter” key without reading the prompt, you are likely to lose all the certificates listed, including any live SSL certificates. If everything goes well, it is time to go live.

Obtaining Live Certificates

Once testing is successful, obtaining the live certificates is likely to go through as expected. The only difference between the test and live environments is the server URL: for the live environment, Buypass uses “https://api.buypass.com/acme/directory”.
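Putting it together, live issuance is the same certonly invocation pointed at the live directory URL. The domain and webroot below are placeholders, and the command is echoed rather than executed so you can review it first…

```shell
# Dry-run sketch of fetching a live Buypass certificate.
# example.com and the webroot are placeholders for your own values.
Domain="example.com"
Webroot="/var/www/example.com"
Server="https://api.buypass.com/acme/directory"

LIVE_CMD="sudo certbot certonly --webroot -w $Webroot -d $Domain -d www.$Domain --server $Server"
echo "$LIVE_CMD"
```

Remove the echo (and the surrounding quotes) to issue the certificate for real.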


Limitations

While the advantage of using Buypass is the extended validity compared to LetsEncrypt, there are a few limitations…

  • The free Go SSL certificates from Buypass don’t support wildcards. That doesn’t mean wildcards aren’t supported at all; wildcard certificates are a paid product at Buypass.
  • The total number of domains that can be attached to a single SSL certificate is limited to two (enough for 99% of the sites on the internet), so it can easily cover the bare / root domain and the www version.
  • There is no dry run. As seen earlier, the testing process is a bit more complicated than with LetsEncrypt, where we can do a “dry run” of authentication. With Buypass, we authenticate the domain(s) and fetch real test SSL certificates that must be deleted before fetching the live ones.

Switching from LetsEncrypt

Switching from LetsEncrypt isn’t hard. Delete the existing certificate and do the above steps. If you ever go wrong, you can always go back and re-issue a free SSL certificate from LetsEncrypt.


Overall, the 180-day validity is the main reason to go with Buypass Go SSL. Also, if you are someone like me who doesn’t want to depend on a single entity (even if it is a nonprofit), then this is a real alternative to LetsEncrypt. Compatibility with the ACME API makes it easy to switch from existing LetsEncrypt installations where only the bare domain and the www version need to be covered under HTTPS.