Building Chromium on MacOS with a Linux icecc cluster

Many times during these last months I thought about keeping my blog updated by writing a new post. Unfortunately, for one reason or another I always found an excuse not to do so. Well, I think that time is over because I finally found something useful and worth the time spent on writing it.

– That is OK but … what are you talking about?
– Be patient Pablo. If you didn’t skip the headline of the post, you probably already know what I’m talking about :-).

Yes, I’m talking about how to set up a Mac Pro computer within an icecc cluster based on Linux hosts, taking advantage of them to get more CPU power to build heavy software projects, like Chromium, faster. The idea behind this is to distribute all the computational work over Linux nodes (fairly cheaper than any Mac), which handle the cross-compiling tasks requested from the Mac host.

I’ve been working as a sysadmin at Igalia for the last couple of years. One of my duties here is to support and improve the developers’ build infrastructure. Recently we’ve faced long build times for heavy software projects like, for instance, Chromium. In this context, one of the main issues I had to solve was how to build Chromium for MacOS in a reasonable time while avoiding spending a lot of money on expensive bleeding-edge Apple hardware to get CPU power.

This is what this post is about: an explanation of how to configure a Mac Pro to use a Linux based icecc cluster to boost the build times using cross-compilation. For simplicity, the explanation focuses on the case of just one single Linux host as icecc node and just one MacOS host requesting compile jobs but, in any case, you can extrapolate the instructions provided here to as many nodes as you need.

So let’s go with the explanation but, first of all, a summary for those who want to go directly to the minimal and essential information …

TL;DR

On the Linux host:

# Configure the iceccd
$ sudo apt install icecc
$ sudo systemctl enable icecc-scheduler
$ edit /etc/icecc/icecc.conf
ICECC_MAX_JOBS="32"
ICECC_ALLOW_REMOTE="yes"
ICECC_SCHEDULER_HOST="192.168.1.10"
$ sudo systemctl restart icecc

# Generate the clang cross-compiling toolchain
$ sudo apt install build-essential icecc
$ git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git ~/depot_tools
$ export PATH=$PATH:~/depot_tools
$ git clone https://github.com/psaavedra/chromium_clang_darwin_on_linux ~/chromium_clang_darwin_on_linux
$ cd ~/chromium_clang_darwin_on_linux
$ export CLANG_REVISION=332838  # or CLANG_REVISION=$(./get-chromium-clang-revision)
$ ./icecc-create-darwin-env
# copy the clang_darwin_on_linux_332838.tar.gz to your MacOS host

On the Mac:

# Configure the iceccd
$ git clone https://github.com/darktears/icecream-mac.git ~/icecream-mac/
$ sudo ~/icecream-mac/install.sh 192.168.1.10
$ launchctl load /Library/LaunchDaemons/org.icecream.iceccd.plist
$ launchctl start /Library/LaunchDaemons/org.icecream.iceccd.plist

# Set the ICECC env vars
$ export ICECC_CLANG_REMOTE_CPP=1
$ export ICECC_VERSION=x86_64:~/clang_darwin_on_linux_332838.tar.gz
$ export PATH=~/icecream-mac/bin/icecc/:$PATH

# Get the depot_tools
$ cd ~
$ git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
$ export PATH=$PATH:~/depot_tools

# Download and build the Chromium sources
$ mkdir chromium && cd chromium && fetch chromium && cd src
$ gn gen out/Default --args='cc_wrapper="icecc" \
  treat_warnings_as_errors=false \
  clang_use_chrome_plugins=false \
  use_debug_fission=false \
  linux_use_bundled_binutils=false \
  use_lld=false \
  use_jumbo_build=true'
$ ninja -j 32 -C out/Default chrome

… and now the detailed explanation

Installation and setup of icecream on Linux hosts

The installation of icecream on a Debian based Linux host is pretty simple. The latest version (1.1) of icecc has been available in Debian testing and sid for a while, so all you have to do is install it from the APT repositories. For the case of stretch, there is a backport publicly available in the apt.igalia.com repository:

sudo apt install icecc
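
You can check which version ended up installed, and from which repository it came, with apt itself:

apt-cache policy icecc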

The second important part of an icecc cluster is the icecc-scheduler. This daemon is in charge of routing the requests from the icecc nodes that require available CPUs for compiling to the nodes of the icecc cluster that are allowed to run remote build jobs.

In this setup we will activate the scheduler in the Linux node (192.168.1.10). The key here is that only one scheduler should be up at the same time in the same network to avoid errors in the cluster.

sudo systemctl enable icecc-scheduler
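
Enabling the unit only registers it for boot; make sure it is also started and listening before adding nodes. A quick check (assuming the default icecream setup, nothing renamed) could be:

sudo systemctl start icecc-scheduler
sudo systemctl status icecc-scheduler
sudo ss -tlnp | grep icecc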

Once the scheduler is configured and up, it is time to add icecc hosts to the cluster. We will start by adding the Linux host, following this idea:

  • The IP of the icecc scheduler is 192.168.1.10
  • The Linux host is allowed to run remote jobs
  • The Linux host is allowed to run up to 32 concurrent jobs (this is an arbitrary decision and can be adjusted for each particular host)
    # edit /etc/icecc/icecc.conf
    ICECC_NICE_LEVEL="5"
    ICECC_LOG_FILE="/var/log/iceccd.log"
    ICECC_NETNAME=""
    ICECC_MAX_JOBS="32"
    ICECC_ALLOW_REMOTE="yes"
    ICECC_BASEDIR="/var/cache/icecc"
    ICECC_SCHEDULER_LOG_FILE="/var/log/icecc_scheduler.log"
    ICECC_SCHEDULER_HOST="192.168.1.10"

We will need to restart the service to apply those changes:

sudo systemctl restart icecc
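
Once restarted, a quick way to confirm that the daemon came up and reached the scheduler is to look at the log file configured above:

sudo tail -f /var/log/iceccd.log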

Installation and setup of icecream on MacOS hosts

The next step is to install and configure the icecc service on our Mac. The easiest way to get icecc available on Mac is the icecream-mac project from darktears. We will do the installation assuming the following facts:

  • The local user account in Mac is psaavedra
  • The IP of the icecc scheduler is 192.168.1.10
  • The Mac is not allowed to accept remote jobs
  • We don’t want to use the Mac as a worker.

To get the icecream-mac software we will clone the project from GitHub:

git clone https://github.com/darktears/icecream-mac.git /Users/psaavedra/icecream-mac/
sudo /Users/psaavedra/icecream-mac/install.sh 192.168.1.10

We will edit the /Library/LaunchDaemons/org.icecream.iceccd.plist daemon definition a bit, as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>org.icecream.iceccd</string>
    <key>ProgramArguments</key>
    <array>
      <string>/Users/psaavedra/icecream-mac/bin/icecc/iceccd</string>
      <string>-s</string>
      <string>192.168.1.10</string>
      <string>-m</string>
      <string>2</string>
      <string>--no-remote</string>
    </array>
    <key>KeepAlive</key>
    <true/>
    <key>UserName</key>
    <string>root</string>
  </dict>
</plist>

Note that we are setting 2 workers on the Mac. Those workers are needed to execute local jobs on the client host for things like linking … We will reload the service with this configuration:

launchctl load /Library/LaunchDaemons/org.icecream.iceccd.plist
launchctl start /Library/LaunchDaemons/org.icecream.iceccd.plist

Getting the cross-compilation toolchain for the icecream-mac

We already have the icecc cluster configured but, before starting to build Chromium on MacOS using icecc, there is still something left to do. We still need a clang cross-compiled for Darwin on Linux and, to avoid incompatibilities between versions, we need a clang based on the very same revision as the Chromium code to be compiled.

You can check and get the cross-compilation clang revision that you need as follows:

cd src
CLANG_REVISION=$(cat tools/clang/scripts/update.py | grep CLANG_REVISION | head -n 1 | cut -d "'" -f 2)
echo $CLANG_REVISION
332838

In order to simplify this step, I made some scripts which make the generation of this clang cross-compiled toolchain easy. On a Linux host:

  • Install build depends:
    sudo apt install build-essential icecc
  • Get the Chromium project depot tools
    git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git ~/depot_tools  
    export PATH=$PATH:~/depot_tools
  • Download the psaavedra’s scripts (yes, my scripts):
    git clone https://github.com/psaavedra/chromium_clang_darwin_on_linux ~/chromium_clang_darwin_on_linux
    cd ~/chromium_clang_darwin_on_linux
  • You can use the get-chromium-clang-revision script to get the latest clang revision being used in Chromium master:
    ./get-chromium-clang-revision
  • and then, to build the cross-compiled toolchain:
    ./icecc-create-darwin-env

    ; this script encapsulates the download, configuration and build of the clang software.

  • A clang_darwin_on_linux_999999.tar.gz file will be generated.
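
Finally, copy the generated tarball to the Mac. A minimal sketch, assuming SSH access to the Mac (the hostname is hypothetical; the target path matches the one used in the next section):

scp clang_darwin_on_linux_332838.tar.gz psaavedra@my-mac.local:/Users/psaavedra/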

Setup the icecc environment variables

Once you have the /Users/psaavedra/clang_darwin_on_linux_332838.tar.gz in place on your MacOS host, you are ready to set the icecc environment variables.

export ICECC_CLANG_REMOTE_CPP=1
export ICECC_VERSION=x86_64:/Users/psaavedra/clang_darwin_on_linux_332838.tar.gz

The first variable enables the usage of the remote clang for C++. The second one establishes the toolchain to be used by the x86_64 (Linux) nodes to build the code sent from the Mac.

Finally, remember to add the icecc binaries to the $PATH:

export PATH=/Users/psaavedra/icecream-mac/bin/icecc/:$PATH
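
At this point you can already run a quick sanity check before launching the full Chromium build. A hypothetical minimal test (the file is just a throwaway example): compile a trivial C++ file through the icecc wrapper and confirm in icemon or in the scheduler log that the job was distributed to the Linux node:

cat > /tmp/hello.cc <<'EOF'
#include <cstdio>
int main() { std::printf("hello\n"); return 0; }
EOF
icecc clang++ -c /tmp/hello.cc -o /tmp/hello.o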

… and building Chromium, at last

Having reached this point, it’s time to build Chromium using the icecc cluster and the cross-compiled clang toolchain previously created. These steps follow the official Chromium build procedure, only adapted to set up the icecc wrapper.

Ensure depot_tools is in the path:

cd ~
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=$PATH:~/depot_tools
ninja --version
# 1.8.2

Get the code:

git config --global core.precomposeUnicode true
mkdir chromium
cd chromium
fetch chromium

Configure the build:

cd src
gn gen out/Default --args='cc_wrapper="icecc" treat_warnings_as_errors=false clang_use_chrome_plugins=false linux_use_bundled_binutils=false use_jumbo_build=true'
# or with ccache
export CCACHE_PREFIX=icecc
gn gen out/Default --args='cc_wrapper="ccache" treat_warnings_as_errors=false clang_use_chrome_plugins=false linux_use_bundled_binutils=false use_jumbo_build=true'

And build, at last:

ninja -j 32 -C out/Default chrome

icemon allows you to graphically monitor the icecc cluster. Run it remotely from your Linux host if you don’t want to install it on the MacOS host:

ssh -X user@yourlinuxbox icemon

; with icemon you should see how each build task is distributed across the icecc cluster.

Let’s encrypt!

“The Let’s Encrypt project aims to make encrypted connections to World Wide Web servers ubiquitous. By getting rid of payment, web server configuration, validation emails, and dealing with expired certificates it is meant to significantly lower the complexity of setting up and maintaining TLS encryption.” (Wikipedia)

Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a public benefit organization. Major sponsors are the Electronic Frontier Foundation (EFF), the Mozilla Foundation, Akamai, and Cisco Systems. Other partners include the certificate authority IdenTrust, the University of Michigan (U-M), the Stanford Law School, the Linux Foundation as well as Stephen Kent from Raytheon/BBN Technologies and Alex Polvi from CoreOS.

So, in summary, with Let’s Encrypt you will be able to request a valid SSL certificate, free of charge, issued by a recognized CA, avoiding the mess of emails, requests and forms typical of the classic CA issuers’ webpages.

The rest of the article shows the steps required for a complete setup of an Nginx server using Let’s Encrypt issued certificates:

Install letsencrypt setup scripts:

cd /usr/local/sbin
sudo wget https://dl.eff.org/certbot-auto
sudo chmod a+x /usr/local/sbin/certbot-auto

Preparing the system for it:

sudo mkdir /var/www/letsencrypt
sudo chgrp www-data /var/www/letsencrypt

A special directory must be accessible to the certificate issuer; this goes in the nginx server configuration:

root /var/www/letsencrypt/;
location ~ /.well-known {
    allow all;
}

Restart the service. Let’s Encrypt will then be able to access this domain and verify the authenticity of the request:

sudo service nginx restart

Note: At this point you need a valid DNS A record pointing to the nginx server host. I will use the mydomain.com domain as an example.
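
Before requesting the certificate, it can be useful to verify that nginx really serves files placed in the webroot. A small sanity check (the file name is just an example):

sudo mkdir -p /var/www/letsencrypt/.well-known
echo ok | sudo tee /var/www/letsencrypt/.well-known/test.txt
curl http://mydomain.com/.well-known/test.txt   # should print "ok"
sudo rm /var/www/letsencrypt/.well-known/test.txt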

Requesting a valid certificate for my domain:

sudo certbot-auto certonly -a webroot --webroot-path=/var/www/letsencrypt -d mydomain.com

If everything is ok, you will get a valid certificate issued for your domain:

/etc/letsencrypt/live/mydomain.com/fullchain.pem

You will need a Diffie-Hellman PEM file for the cryptographic key exchange:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Set up the nginx virtual host with SSL as usual, but using the Let’s Encrypt issued certificate:

...
ssl on;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
...

Check the new configuration and restart Nginx:

nginx -t
sudo service nginx restart

Renewing the issued certificate:

cat << EOF > /etc/cron.d/letsencrypt
30 2 * * 1 root /usr/local/sbin/certbot-auto renew >> /var/log/le-renew.log
35 2 * * 1 root /etc/init.d/nginx reload
EOF
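
If you want to be sure the scheduled renewal will work before the certificate actually approaches expiry, certbot can simulate the whole process against the staging environment:

sudo /usr/local/sbin/certbot-auto renew --dry-run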

The NFS 16 groups limit issue

Last Friday I was involved in a curious situation while trying to set up an NFS server. The NFS export was mounted on a UNIX server which was using UNIX user accounts assigned to many groups. These users were working with files and directories stored on the NFS server.

As a brief description of the situation which prompted this post, I will say that the problem occurs when you are using UNIX users which are assigned to more than 16 UNIX groups. In this scenario, if you are using NFS (whatever the version) with UNIX system authentication (AUTH_SYS), quite common nowadays in spite of the security recommendations, you will get permission denied errors when accessing certain arbitrary files and directories. The reason is that the list of secondary groups assigned to the user is truncated by the AUTH_SYS implementation. That is simply amazing!

Well, to be honest, this is not an unknown NFS problem. This limitation has been around since the early stages of modern computing technology. After a quick search on the Internet, I found the reason why this happens: it is not an NFS limitation but a limit specified in AUTH_SYS:

   The client may wish to identify itself, for example, as it is
   identified on a UNIX(tm) system.  The flavor of the client credential
   is "AUTH_SYS".  The opaque data constituting the credential encodes
   the following structure:

         struct authsys_parms {
            unsigned int stamp;
            string machinename<255>;
            unsigned int uid;
            unsigned int gid;
            unsigned int gids<16>;
         };

The root cause

AUTH_SYS is the historical method used by client programs contacting an RPC server. It allows the server to get information about what the client should be able to access, and which functions should be allowed. Without authentication, any client on the network that can send packets to the RPC server could access any function.

AUTH_SYS has been in use for years in many systems just because it was the first authentication method available, but AUTH_SYS is not a secure authentication method nowadays. In AUTH_SYS, the RPC client sends the UNIX UID and GIDs of the user, and the server implicitly trusts that the user is who the user claims to be. All this information is sent through the network without any kind of encryption or authentication, so it is highly vulnerable.

As a consequence, AUTH_SYS is an insecure security mode; it can serve as the proverbial open lock on a door. Overall, the technical articles about these matters strongly suggest the usage of other alternatives like NFSv4 (even NFSv3) and Kerberos, but AUTH_SYS is still commonly used within companies, so we must still deal with it.

Note: This article doesn’t focus on security issues. The main purpose of this article is to describe a specific situation and show the possible alternatives identified during the troubleshooting of the issue.

Taking up the thread …

I was profiling a situation where the main issue was caused by the truncation of the UNIX secondary groups list. Before continuing, a summary of the context: a UNIX user has a primary group, defined in the passwd database, but can also be a member of many other groups, defined in the group database. UNIX systems historically hardcoded a limit of 16 groups that a user can be a member of (source). This means that NFS clients are only able to pass 16 groups. Quite poor when you deal with dozens and dozens of groups.

As we already know, the problem lies in the NFS compliance with the AUTH_SYS specification, which has an in-kernel data structure where the groups a user has access to are hardcoded as an array of 16 identifiers (gids). Even though Linux now supports 65536 groups, it is still not possible to send more than 16 of them over AUTH_SYS from userland.
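
A quick way to know whether a given account is affected (the user name is just an example) is to count its groups; anything above 16 will be silently truncated when going through AUTH_SYS:

id -G alice | wc -w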

My scenario …

At this point, I had identified this very situation in my case. I had users assigned to more than 16 secondary groups and a service using NFS for data storage but, in addition, I had some extra furniture in the room:

  • Users of the service are actual UNIX accounts. The authorization for file access is delegated to the UNIX system itself
  • I didn’t have a common LDAP server sharing the uids and gids
  • The NFS service wasn’t under my control

; this last point made my case a little bit more miserable, as we will see later.

Getting information from the Internet …

First of all, a brief analysis of the situation is always welcome:

– What is the actual problem? This problem occurs when a user who is a member of more than 16 groups tries to access a file or directory on an NFS mount that depends on his group rights in order to be authorized to see it. Isn’t it?
– Yes!
– So, whatever you do should start by asking Google. If the issue has been present for all those years, the solution should be present too.
– Good idea! – I said, concluding the dialogue with myself.

After a couple of minutes I had a complete list of articles, mail archives, forums and blog posts which threw up all kinds of information about the problem. All of them talked about most of the points introduced up to this point in this article. Each one more or less interesting, one of them stuck out from the rest: the solving-the-nfs-16-group-limit-problem article posted on the xkyle.com blog.

The solving-the-nfs-16-group-limit-problem article describes a similar situation and offers its own conclusions. I must admit that I am pretty aligned with those conclusions and I would recommend this post for a deep read.

The silver bullet

This solution is the best case, if you have control of the NFS server and you are running at least Linux kernel 2.6.21. This kernel or newer supports an NFS feature which allows ignoring the gids sent in the RPC operations and using instead the local gids assigned to the uid on the server:

-g or --manage-gids
Accept requests from the kernel to map user id numbers into lists of group id numbers for use in access control. An NFS request will normally (except when using Kerberos or other cryptographic authentication) contains a user-id and a list of group-ids. Due to a limitation in the NFS protocol, at most 16 groups ids can be listed. If you use the -g flag, then the list of group ids received from the client will be replaced by a list of group ids determined by an appropriate lookup on the server. Note that the 'primary' group id is not affected so a newgroup command on the client will still be effective. This function requires a Linux Kernel with version at least 2.6.21.

The key to this solution is to keep the ids synchronized between the client and the server. A common solution for this last requirement is a shared Name Service Switch (NSS) service. Therefore, the --manage-gids option allows the NFS server to ignore the information sent by the client and check the groups directly against the information stored in LDAP or whatever backend the NSS is using. For this to work, the NFS server and the NFS client must share the UIDs and GIDs.
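
On a Debian-based NFS server, enabling this behaviour is usually just a matter of passing the option to rpc.mountd; a minimal sketch, assuming the options live in /etc/default/nfs-kernel-server:

# /etc/default/nfs-kernel-server
RPCMOUNTDOPTS="--manage-gids"

# then restart the NFS services
sudo systemctl restart nfs-kernel-server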

That is the approach suggested in solving-the-nfs-16-group-limit-problem. Unfortunately, it was not my case :-(.

But not in my case

In my case, I had no way to synchronize the ids of the client with the ids of the NFS server. In my situation the ids on the client were obtained from a Postgres database plugged into the NSS as one of its backends, and there was no chance to use this backend for the NFS server.

The solution

But this was not the end. Fortunately, the nfs-ngroups patches developed by frankvm@frankvm.com lift the 16-entry limit on the list of (32-bit numeric) supplemental group identifiers. As he says in the README file:

This patch is useful when users are member of more than 16 groups on a Linux NFS client. The patch bypasses this protocol imposed limit in a compatible manner (i.e. no server patching).

That was perfect! It was exactly what I was looking for. So I had to build a custom kernel with the right patch applied on the server under my control and voilà!:

wget https://cdn.kernel.org/pub/linux/kernel/v3.x/linux-3.10.101.tar.xz
wget http://www.frankvm.com/nfs-ngroups/3.10-nfs-ngroups-4.60.patch
tar -xf linux-3.10.101.tar.xz
cd linux-3.10.101/
patch < ../3.10-nfs-ngroups-4.60.patch
make oldconfig
make menuconfig
make rpm
rpm -i /root/rpmbuild/RPMS/x86_64/kernel-3.10.101-4.x86_64.rpm
dracut "initramfs-3.10.101.img" 3.10.101
grub2-mkconfig > /boot/grub2/grub.cfg

Steps for CentOS, based on these three documents: [1] [2] [3]
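
After rebooting into the patched kernel, a quick way to confirm that the limit is gone (the user name and mount point are hypothetical) is to check the running kernel version and retry the access that used to fail for a user with more than 16 groups:

uname -r                  # should report the patched 3.10.101 kernel
id -G someuser | wc -w    # more than 16 groups
sudo -u someuser ls /mnt/nfs/restricted_dir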

Conclusions

As I said, this post doesn’t focus on security issues. AUTH_SYS is a solution designed for the times before the Internet. Nowadays, the total interconnection of computer networks discourages the usage of methods like AUTH_SYS. It is too naive an authentication method for the present.

Anyway, NFS services are still quite common and many of them are still deployed with AUTH_SYS rather than Kerberos or other intermediate solutions. This post is about a specific situation in one of these deployments. Even if these services should be progressively replaced by more secure solutions, a sysadmin will still be asked for practical answers to the particularities of these legacy systems.

Knowledge about the NFS 16 secondary groups limit and the different recognized workarounds is still interesting from a know-how point of view. This post shows two solutions, even three if you consider the Kerberos option, to fix this issue … just one of them fulfilled my requirements in my particular case.

Pablo says: “welcome ess-pipe-de to my life!”

Recently, some guy suggested to me the usage of spiped instead of “SSH -L” to create secure and more robust tunnels between peers under my control. The father of the creature is Alex Polvi (https://twitter.com/polvi), who doesn’t look like the new guy in the class: CEO of CoreOS Inc., previously General Manager at Rackspace and Product Manager and Sysadmin for mozilla.org. So, feel free to trust spiped the next time you want protected peer-to-peer communication between a pair of servers:

 

To set up an encrypted and authenticated pipe for sending email between two
systems (in the author's case, from many systems around the internet to his
central SMTP server, which then relays email to the rest of the world), one
might run

# dd if=/dev/urandom bs=32 count=1 of=keyfile
# spiped -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:25' -k keyfile

on a server and after copying keyfile to the local system, run

# spiped -e -s '[127.0.0.1]:25' -t $SERVERNAME:8025 -k keyfile

at which point mail delivered via localhost:25 on the local system will be
securely transmitted to port 25 on the server.

 

Suggested post: http://www.daemonology.net/blog/2012-08-30-protecting-sshd-using-spiped.html

Original repository in github: https://github.com/polvi/spiped

Hide the VLC cone icon in the browser-plugin-vlc for Linux (Mozilla or Webkit) (Debian way)

(Image: VideoLAN’s fu***ng cone)

The next instructions describe how to hide the VLC cone icon in the VLC plugin for web browsers. I think this tip can be useful for other ninjas insofar as there is not a lot of information on the Internet describing this. The instructions follow the Debian way and use the Debian/DPKG tools, but I guess the example is explicit enough to be extrapolated to other environments.

Requirements:

  • You need to install all the build dependencies for the browser-plugin-vlc before executing dpkg-buildpackage -rfakeroot (see the sketch below)
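
A quick way to do that on Debian, assuming the deb-src entries are enabled in sources.list, is to let APT resolve the build dependencies declared by the source package:

sudo apt-get build-dep browser-plugin-vlc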

Steps:

  • apt-get source browser-plugin-vlc
  • cd npapi-vlc-2.0.0/
  • edit npapi/vlcplugin_gtk.cpp and replace the code as follows:
    --- npapi-vlc-2.0.0.orig/npapi/vlcplugin_gtk.cpp
    +++ npapi-vlc-2.0.0/npapi/vlcplugin_gtk.cpp
    @@ -46,12 +46,13 @@ VlcPluginGtk::VlcPluginGtk(NPP instance,
         vol_slider_timeout_id(0)
     {
         memset(&video_xwindow, 0, sizeof(Window));
    -    GtkIconTheme *icon_theme = gtk_icon_theme_get_default();
    -    cone_icon = gdk_pixbuf_copy(gtk_icon_theme_load_icon(
    -                    icon_theme, "vlc", 128, GTK_ICON_LOOKUP_FORCE_SIZE, NULL));
    -    if (!cone_icon) {
    -        fprintf(stderr, "WARNING: could not load VLC icon\n");
    -    }
    +    cone_icon = NULL;
     }
     
     VlcPluginGtk::~VlcPluginGtk()
    
  • dpkg-source --commit
  • dpkg-buildpackage -rfakeroot
  • cd ../
  • ls browser-plugin-vlc_2.0.0-2_amd64.deb

Installation:

  • dpkg -i browser-plugin-vlc_2.0.0-2_amd64.deb

Paramiko example

I have the pleasure of presenting a tip from the past. Today, from a long time ago: Paramiko.

import os
import paramiko

hostname = "vps.doc.com"
username = "admin"
password = "password"
port = 22
remotepath = "/tmp/test"

# Open the SSH connection, verifying the host against the local known_hosts file
ssh = paramiko.SSHClient()
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(hostname, port=port, username=str(username), password=str(password))
sftp = ssh.open_sftp()

# Read the content of the remote file
remote_file = sftp.file(remotepath, "r")
remote_file.set_pipelined(True)
file_lines = remote_file.read()

# ... process file_lines as needed ...

# Write the (possibly modified) content back to the remote path and clean up
sftp.open(remotepath, 'w').write(file_lines)
sftp.close()
ssh.close()