The British Library has set up a workflow that uses PhantomJS during harvesting to capture screenshots and DOM images, and to identify loaded URLs for queueing in Heritrix.
BL has developed two components which need to be installed in this workflow. Webrender-phantomjs is a REST webservice facade to PhantomJS, written in Python/Django and most easily deployed in a gunicorn container. Webrender-har-daemon is a Python daemon which fetches URLs from a RabbitMQ queue and forwards them to webrender-phantomjs. Webrender-har-daemon also accepts the HAR data returned by PhantomJS, saves it in WARC files, and passes any discovered URLs back to Heritrix, either via RabbitMQ or by writing them directly into Heritrix's action directory.
Deploying these components is not entirely trivial, but it doesn't involve any insurmountable challenges either. On Ubuntu, PhantomJS itself is available in the standard software channel. Webrender-phantomjs is available from https://github.com/ukwa/webrender-phantomjs . There is a Dockerfile which can be used for installation; even if you don't want to use it, it is still the best documentation of the required dependencies - openssl-devel, libjpeg-devel, pip, Django etc. It would be useful if someone (i.e. me) created a recipe for installing the system from scratch, ideally using a Python virtualenv to create a localised installation.
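A first sketch of such a from-scratch recipe might look like the following. The package names and the presence of a requirements.txt are assumptions on my part - the Dockerfile in the repository remains the authoritative dependency list.

```shell
# Sketch of a from-scratch install on Ubuntu, inside a virtualenv.
# Package names and requirements.txt are assumptions - check the Dockerfile.
sudo apt-get install phantomjs python-virtualenv libssl-dev libjpeg-dev
git clone https://github.com/ukwa/webrender-phantomjs.git
cd webrender-phantomjs
virtualenv venv
. venv/bin/activate
pip install -r requirements.txt   # if absent, install django and gunicorn by hand
```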
Webrender-phantomjs is configured mostly in the file webrender-phantomjs/webrender/phantomjs/settings.py . The file gunicorn.ini can also be edited to bind the webservices to, for example, your host interface instead of the loopback interface. The README has instructions for starting the service using either manage.py or gunicorn; I only got gunicorn to work. Once the service is running, try loading a URL like http://pc609.sb.statsbiblioteket.dk:8000/webtools/urls/http://www.netarkivet.dk or http://pc609.sb.statsbiblioteket.dk:8000/webtools/image/http://www.netarkivet.dk to get either a list of URLs or a screenshot from netarkivet.
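The gunicorn.ini file is itself a small Python file. The exact contents shipped in the repository may differ, but the relevant change for binding to the host interface is just the bind address; port 8000 matches the URLs above:

```python
# gunicorn.ini (sketch, assumed option values): bind to all interfaces
# instead of loopback so the webservice is reachable from other hosts.
bind = "0.0.0.0:8000"
workers = 4
```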
Webrender-har-daemon is easier to get started with. It also has a settings file, with pointers to the RabbitMQ queue and the webrender-phantomjs endpoint, as well as the directory paths where you want the WARC files to end up.
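The settings boil down to three kinds of values. The variable names and the HAR endpoint path below are my assumptions, not the shipped file - check the actual settings file in the repository:

```python
# Sketch of the kind of values the webrender-har-daemon settings file holds.
# Variable names and the /webtools/har/ path are assumptions.
amqp_url = "amqp://guest:guest@localhost:5672/%2f"  # RabbitMQ broker to consume URLs from
webrender = "http://localhost:8000/webtools/har/"   # webrender-phantomjs endpoint (assumed path)
output_directory = "/var/spool/webrender/warcs"     # where the finished warcfiles end up
```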
If you can't be bothered to run an actual harvest, you can test webrender-har-daemon from the command line using the queue-url tool from umbra:
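Under the hood, queue-url just publishes a JSON message containing the URL to the RabbitMQ queue the daemon consumes from. A minimal Python sketch of the same thing using pika - the queue name and the message field names are assumptions, so match them to the daemon's settings:

```python
# Sketch of what queue-url does: publish a JSON message to the RabbitMQ
# queue that webrender-har-daemon consumes. Queue name and message field
# names ("url", "clientId") are assumptions - check the daemon's settings.
import json


def build_message(url, client_id="manual-test"):
    """Build the JSON body for one URL; field names are assumptions."""
    return json.dumps({"url": url, "clientId": client_id})


def queue_url(url, host="localhost", queue="requests"):
    import pika  # imported lazily so build_message works without a broker
    connection = pika.BlockingConnection(pika.ConnectionParameters(host))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    channel.basic_publish(exchange="", routing_key=queue,
                          body=build_message(url))
    connection.close()
```

For example, `queue_url("http://www.netarkivet.dk")` should make the daemon render netarkivet.dk and write the result to a warcfile.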
This dumps the relevant HAR data in a warcfile.
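A quick way to confirm that the HAR records actually landed in the warcfile is to scan it for WARC-Target-URI headers, which needs nothing beyond the standard library:

```python
# Sanity check: list the WARC-Target-URI headers in a (possibly gzipped)
# WARC file. Point it at a file in the daemon's output directory.
import gzip


def warc_target_uris(path):
    opener = gzip.open if path.endswith(".gz") else open
    uris = []
    with opener(path, "rt", errors="replace") as f:
        for line in f:
            if line.startswith("WARC-Target-URI:"):
                uris.append(line.split(":", 1)[1].strip())
    return uris
```

If the rendered URL shows up in the returned list, the round trip from queue to warcfile worked.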