Note: You are looking at a static snapshot of documentation related to Robot Framework automations. The most recent documentation is at https://robocorp.com/docs

Robocorp-Hosted Cloud Worker

Run your robot in Robocorp-Hosted Cloud Worker

You need zero setup to run your Unattended Processes on our Cloud Workers.
👉 Select Cloud Worker in your process step configuration.
👉 Run the process.

[Screenshot: Cloud Worker selected in the process step configuration]

Control Room handles the orchestration from there.

You can easily test things with our Portal examples; most can run on the containers.
For example, RPA Form Challenge handles data from Excel, automates a website, and provides screenshots of the results.

🚀 Certificate Course I is also an excellent place to get familiar with Cloud Workers in action.

Details about the containers

Robocorp-Hosted Cloud Workers run in secure Docker containers on AWS EC2. A container is created for the duration of the robot run and then deleted. Cloud Workers also provide scalability and parallel runs out of the box.

We base our containers on the official Debian images (version bookworm), to which we add only the required basics: the Agent Core, the Chromium browser, Git, LibreOffice, and some basic tooling. You can check out the Docker configuration files below for more technical details.

The containers are headless, meaning no GUI components are installed. Note that the containers can still handle, for example, all browser automations and take screenshots; Linux does not need an active desktop GUI to render webpages.

Do not forget that you can load applications like Firefox within your robot, and these also work in the containers (example here). The applications listed in your conda.yaml get set up in isolated environments by RCC. Check out what you can find in conda-forge.
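
As an illustrative sketch (the package names from conda-forge are real, but the version pins are assumptions, not recommendations), a conda.yaml that pulls Firefox and its driver from conda-forge could look like this:

```yaml
channels:
  - conda-forge

dependencies:
  - python=3.9.13       # Python interpreter for the robot
  - firefox=102.0       # Firefox browser from conda-forge (illustrative pin)
  - geckodriver=0.31.0  # WebDriver binary needed to drive Firefox
  - pip=22.1.2
  - pip:
      - rpaframework==22.5.3  # automation libraries, e.g. RPA.Browser.Selenium (illustrative pin)
```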

Different Cloud Worker versions

The new Cloud Workers and the update cycle described below will be released after June 13th, 2023.

Robocorp hosts the containers behind Cloud Workers, so we need to keep the containers up-to-date without causing too much hassle for end-users, and without delays in the case of critical security updates. For this reason, we have set up a simple update cadence and support for different container versions, visible to you in Control Room.

[Diagram: Cloud Worker update cycle]

NOTE: In case of a critical security update, we reserve the right to speed up the cycle.

The diagram above illustrates the simple update cadence:

  1. Robocorp releases an update to the Cloud Worker.
  2. All processes using Cloud Worker (Early Access) automatically start to run on the new version.
  3. After two weeks, the Cloud Worker gets updated to the version that was in Early Access.
  4. Cloud Worker (Previous) is there for backup/troubleshooting:
    • Users can temporarily switch to Cloud Worker (Previous).
    • Updating the robot dependencies in conda.yaml is always the first recommended action; browser updates are the most common reason for issues (see the conda.yaml sketch after this list).
    • Notify Robocorp support using Submit issue from the Control Room Step Run view.
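
For example, a minimal conda.yaml sketch that keeps the automation dependencies pinned so they can be bumped deliberately after a Cloud Worker update (the versions are illustrative assumptions):

```yaml
channels:
  - conda-forge

dependencies:
  - python=3.9.13
  - pip=22.1.2
  - pip:
      # Pin exact versions so an update is an explicit, reviewable change.
      # Bumping rpaframework is usually the first thing to try when the
      # container's browser has been updated and browser automation breaks.
      - rpaframework==22.5.3  # illustrative pin, not a recommendation
```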

If you need to control the update cadence or need custom dependencies, we support several options for self-hosting a Worker.

👉 We also have a Worker Container guide that shows how to create your own container using our Docker configurations as the base.

Ubuntu 18.04 Legacy Worker

  • Your existing processes will keep running as they are now.
  • The change described below will not change your existing processes automatically.
  • You can and should start updating your processes to run on the new Workers.

We ran Cloud Workers on top of the Ubuntu 18.04 container for three years, but the Ubuntu versions after 18.04 are moving to Snap-package installers, which do not work well with containers or automations. We cannot remain on Ubuntu 18.04 either, since it will stop getting security updates in June 2023.

We have hunted down and tested a new Debian-based container that stays as close as possible to the Ubuntu containers while still getting security updates. All new Cloud Worker versions will run on the new container.

Even though the container change has not affected robot executions in our tests, we do realize that changing the base container is a big change and might cause problems.

👉 We will simply rename the old Cloud Worker to Cloud Worker (Ubuntu 18.04 Legacy), so the existing processes will keep running as they are.

👉 You can and should update your processes from Cloud Worker (Ubuntu 18.04 Legacy) to using Cloud Worker.

Global Environment Cache for Robocorp-Hosted Cloud Workers

Because building new Python environments can take minutes, our Cloud Workers leverage the caching provided by RCC. The cloud containers where the Worker runs get a shared environment cache that builds up automatically as new unique conda.yaml files are encountered.

Environments are NOT added to the Global Cache if the conda.yaml contains:

  • references to private packages or direct URL references.
  • ambiguous versions for any package.

Normal environments that only use publicly available dependencies get added to the cache automatically. Typically, ~5 minutes after the first run of a new conda.yaml, the environment is available in the cache and loads quickly on subsequent container runs.
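
To make the rules concrete, here is an illustrative conda.yaml (the version pins and the private repository URL are assumptions) that qualifies for the Global Cache; the commented-out lines show the kinds of entries that would keep an environment out of the cache:

```yaml
channels:
  - conda-forge

dependencies:
  - python=3.9.13   # exact, public version -> cacheable
  - pip=22.1.2
  - pip:
      - rpaframework==22.5.3                           # exact, public version -> cacheable
      # - rpaframework                                  # no pin: ambiguous version -> NOT cached
      # - rpaframework>=22                              # version range: ambiguous -> NOT cached
      # - git+https://github.com/acme/private-lib.git   # direct URL / private package -> NOT cached
```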

Last edit: June 12, 2023