
Note that this documentation is for the old 5.1 release.
For the newest documentation, please see the current release documentation home.



Disabling logging to stdout with logback.xml

By default our tools write log entries to stdout. To disable this, create a logback.xml file with the following content


<configuration>
  <statusListener class="ch.qos.logback.core.status.NopStatusListener" />
  <root level="OFF"/>
</configuration>


and refer to this file with the system property -Dlogback.configurationFile=/path/to/logback.xml, as shown in the sample invocations below.
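For example, the file can be created and checked like this (using /tmp/logback.xml as an illustrative path; place it wherever suits your deployment):

```shell
# Write the logback.xml shown above to an illustrative location (/tmp here),
# using a quoted heredoc so the shell expands nothing inside it.
cat > /tmp/logback.xml <<'EOF'
<configuration>
  <statusListener class="ch.qos.logback.core.status.NopStatusListener" />
  <root level="OFF"/>
</configuration>
EOF

# Every tool invocation then picks the file up via the system property:
#   java -Dlogback.configurationFile=/tmp/logback.xml ...
grep -c 'level="OFF"' /tmp/logback.xml   # sanity check: the root logger is off
```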


Given a specific jobID (e.g. 42) and a harvestnamePrefix, this tool creates a metadata-1.warc file containing the CDX entries for all (w)arc files belonging to that job.

Prerequisites and arguments

You need to specify the repository client used for accessing your archived data. If you use the default client, JMSArcRepositoryClient, you also need to specify the archive replica to use (defined by the setting "settings.common.useReplicaId"), the environment name, the application name and the application instance id. These can all be given on the command line as overrides of the default values, or defined in a local settings.xml file.
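Command-line overrides can be sketched as below. The values (ReplicaOne, PROD, and so on) are placeholders, and the assumption that the application name and instance id also live under the settings.common.* path should be checked against your settings.xml:

```shell
# Illustrative command-line overrides for the settings named above.
# All values are placeholders, not defaults; the settings.common.* paths
# for applicationName/applicationInstanceId are an assumption to verify.
export OPTS="-Dsettings.common.useReplicaId=ReplicaOne \
 -Dsettings.common.environmentName=PROD \
 -Dsettings.common.applicationName=CreateCdxTool \
 -Dsettings.common.applicationInstanceId=1"
echo "$OPTS"
```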

Needed jar files on the classpath: dk.netarkivet.harvester.jar and dk.netarkivet.archive.jar (the latter only if using the default repository client).

The tool takes two mandatory arguments, for example: --jobID 42 --harvestnamePrefix 42-3

An optional argument, -a or -w, selects the metadata format. By default the program writes a metadata WARC file; the -a option makes it write a metadata ARC file instead.

Sample usage of this tool

export INSTALLDIR=/home/test/netarchive
export CLASSPATH=$CLASSPATH:$INSTALLDIR/lib/netarchivesuite-archive-core.jar
export OPTS="-Ddk.netarkivet.settings.file=localsettings.xml -Dlogback.configurationFile=/path/to/logback.xml"

java $OPTS [-a|-w] --jobID 42 --harvestnamePrefix 42-3

This tool enables you to create (create command), download (download command), update (update command) and list (showall command) the existing harvest templates.

Prerequisites and arguments

You need to point to a settings file with connection information for your harvest database. In a standard NAS deployment, use INSTALLDIR/conf/settings_GUIApplication.xml.

Sample usage of this tool

export INSTALLDIR=/home/test/netarchive
export CLASSPATH=$INSTALLDIR/lib/netarchivesuite-harvester-core.jar
export OPTS="-Ddk.netarkivet.settings.file=$INSTALLDIR/conf/settings_GUIApplication.xml -Dlogback.configurationFile=/path/to/logback.xml"
java $OPTS <command> <args>

The different <command> <args> possibilities:

  1. create <template-name> <xml-file for this template>
  2. download [<template-name>]*
  3. update <template-name> <xml-file to replace this template>
  4. showall

Note that with the download command you can either download all templates in one go (no arguments) or name the templates to download, separated by spaces.

This tool enables you to update the tables in the harvest database to the versions required by this release of NetarchiveSuite. Run it after installing the software, but before starting the NetarchiveSuite applications.

Prerequisites and arguments

You need to point to a settings file with connection information for your harvest database. In a standard NAS deployment, use INSTALLDIR/conf/settings_GUIApplication.xml.

The harvest database also needs to be running.

Sample usage of this tool

First, start the harvest database if it isn't already running.

Then run the update tool (the example below uses Derby as the database; if you use another database, replace derbyclient.jar with the appropriate JDBC driver jar):

export INSTALLDIR=/home/test/netarchive
export CLASSPATH=$INSTALLDIR/lib/netarchivesuite-harvester-core.jar:$INSTALLDIR/lib/derbyclient.jar
export OPTS="-Dlogback.configurationFile=/path/to/logback.xml -Ddk.netarkivet.settings.file=$INSTALLDIR/conf/settings_GUIApplication.xml"
java $OPTS

Finally, shut down the harvest database again if you are using Derby.

This tool forces the IndexServer to create indices preemptively. It can be used to retrieve logs and CDX entries for previously completed harvest jobs before they are actually needed, which can reduce the time it takes to generate deduplication indices.


You need to have an IndexServerApplication online. If you use HTTP as the file transport method, you probably also need to override the setting settings.common.remoteFile.port to avoid conflicts (in the example below the port number is set to 5000).

Furthermore, all harvest jobs referred to in the CreateIndex commands must have metadata-1.arc files stored in the archive.


export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/netarchivesuite-harvester-core.jar
export OPTS="-Dsettings.common.cacheDir=/tmp/cache \
 -Dsettings.common.environmentName=QUICKSTART -Dsettings.common.remoteFile.port=5000 -Dlogback.configurationFile=/path/to/logback.xml"
java $OPTS -t dedup -l 1,2

This requests a deduplication index based on the harvest jobs with IDs 1 and 2, and stores the index in /tmp/cache/DEDUP_CRAWL_LOG/1-2-cache.


