Tools in the Archive Module


This tool is only run when converting from a file-based administration of the ArcRepository to the database-based administration. It reads the admin data file and inserts its contents into the database.


You need to have the external database running, the file must exist, and the tool must be run from the installation directory.


java -Ddk.netarkivet.settings.file=conf/settings_ArcRepositoryApplication.xml []

The optional argument is the path to the admin data file. By default the file is assumed to be located in the directory where the tool is run. The argument is therefore only necessary if the file is in another directory or has another name (e.g. in a backups/ directory).

This tool forces the IndexServer to create indices. It can be used to retrieve crawl logs and CDX files for previously completed harvest jobs before they are actually needed. This is helpful if you want to reduce the time it takes to generate deduplication indices.


You need to have an IndexServerApplication online. If you use HTTP as the file transport method, you probably also need to override the setting settings.common.remoteFile.port in order to avoid conflicts (in the example below, the port number is set to 5000).

Furthermore, all harvest jobs referred to in the CreateIndex commands must have metadata-1.arc files stored in the archive.


export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/dk.netarkivet.archive.jar
java -Dsettings.common.cacheDir=/tmp/cache -Dsettings.common.environmentName=QUICKSTART \
-Dsettings.common.remoteFile.port=5000 -t dedup -l 1,2

This requests a deduplication index based on the harvest jobs with IDs 1 and 2, and stores this index in /tmp/cache/DEDUP_CRAWL_LOG/1-2-cache.
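The name of the cache entry appears to be derived from the requested job IDs. A small sketch of that mapping (an observation from the example above, not a documented specification):

```shell
# Sketch: jobs 1 and 2 produce a cache entry named "1-2-cache" under the
# DEDUP_CRAWL_LOG cache directory (naming inferred from the example above).
ids="1,2"
name="$(echo "$ids" | tr ',' '-')-cache"
echo "/tmp/cache/DEDUP_CRAWL_LOG/$name"
```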

With this tool you can retrieve a file from your archive.


If you want to use another arcrepository client than the default (dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient), you need to override the corresponding setting. If you use the default, you need to set the environmentName correctly so that your ArcRepositoryApplication receives your GetFile request, and you need to define your replicas and the replicaId of the replica from which you want to get the data. All of this is most easily put into a local settings.xml:
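A minimal settings.xml along these lines might look as follows. This is a sketch only: the exact element layout of the replicas section is an assumption and should be checked against your installation's settings documentation.

```xml
<settings>
  <common>
    <!-- Environment name so your ArcRepositoryApplication receives the request -->
    <environmentName>QUICKSTART</environmentName>
    <!-- The replica to get data from -->
    <useReplicaId>SH</useReplicaId>
    <replicas>
      <replica>
        <replicaId>SH</replicaId>
        <replicaName>ReplicaOne</replicaName>
        <replicaType>bitarchive</replicaType>
      </replica>
    </replicas>
  </common>
</settings>
```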


In the settings.xml above, the environment name has been set to QUICKSTART, there is only a single replica with replicaId=SH, and the ID of the replica from which to get the data is "SH".


export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/dk.netarkivet.archive.jar
export SETTINGSFILE=/home/user/conf/settings_ArcRepositoryApplication.xml
java -Ddk.netarkivet.settings.file=$SETTINGSFILE -Dsettings.common.remoteFile.port=5000 \
     3-metadata-1.arc

If the file 3-metadata-1.arc exists in your SH replica, it is downloaded from the archive and written to the current working directory. If not, you will wait a long time until the arcrepository client times out. The tool has an optional second argument, a destination file:

export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/dk.netarkivet.archive.jar
export SETTINGSFILE=/home/user/conf/settings_ArcRepositoryApplication.xml
java -Ddk.netarkivet.settings.file=$SETTINGSFILE -Dsettings.common.remoteFile.port=5000 \
     3-metadata-1.arc destination-file.arc

This tool allows you to upload ARC files to a repository of your choice.

The type of arcrepository you upload your files to is defined by a setting whose default is dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient. This client uses JMS messages to communicate with a repository.


If you use the client dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient, you need to ensure that you send upload requests to the correct JMS queue and that you receive the responses from the client. This is ensured by setting settings.common.environmentName to the proper value (e.g. PROD or DEV). The same holds for settings.common.applicationName (e.g. Upload) and, finally, settings.common.applicationInstanceId (e.g. ONE or TWO). If you intend to override any of the settings mentioned above, you can either do the overrides on the command line or write them to a settings file.

Using the tool

This tool will upload a number of local files to all replicas in the archive. An example of an execution command is:

   java -Ddk.netarkivet.settings.file=/home/user/conf/settings_ArcRepositoryApplication.xml \
        -cp lib/dk.netarkivet.archive.jar \ \
        file1.arc [file2.arc ...]

where file1.arc file2.arc ... are the files to be uploaded.

This will cause the files to be uploaded. Furthermore, the default client (JMSArcRepositoryClient) deletes each file locally once it has been uploaded successfully. This means that any files left after execution are not known to be stored safely.
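Because successfully uploaded files are deleted locally, a simple post-run check for leftovers can flag files that need to be re-uploaded. A sketch (the directory and file names are made up for illustration):

```shell
# Simulate an upload directory where one .arc file was left behind
# (i.e. its upload was not confirmed, so it was not deleted).
updir=$(mktemp -d)
touch "$updir/3-metadata-1.arc"

# Any .arc files still present after Upload ran are not known to be stored safely.
leftover=$(ls "$updir"/*.arc 2>/dev/null)
if [ -n "$leftover" ]; then
  echo "not safely stored: $leftover"
fi
```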

This tool takes a CDX-based Lucene index and a URI, retrieves the corresponding ARC record from the archive, and dumps it to stdout.


The same as for GetFile.


export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/dk.netarkivet.archive.jar
export SETTINGSFILE=/home/user/conf/settings_ArcRepositoryApplication.xml
export LUCENE_INDEX=/tmp/cache/DEDUP_CRAWL_LOG/1-cache
export URI=
java -Ddk.netarkivet.settings.file=$SETTINGSFILE -Dsettings.common.remoteFile.port=5000 \
     $LUCENE_INDEX $URI

If the URI is not in the given index, an exception is written to stdout with the message "Resource missing in index or repository for URI". TODO: Mention how to make a Lucene index for your stored arcfiles.

The bitarchives are designed to receive batch programs to run on all the arc-files stored in the bitarchive. This is true whether the bitarchive is installed as a local arc-repository or as a distributed repository with several bitarchives. Batch programs are also used internally by the NetarchiveSuite software for specific tasks such as getting the CDX entries for a specific job, getting checksums of arc-files stored in the bitarchive, or getting lists of arc-files from the bitarchive.

The RunBatch program is used to send your own batchjobs to the bitarchives.

Note that a batchjob will only be sent to one bitarchive replica.

It is not possible to send batchjobs to checksum replicas; only bitarchive replicas can handle batchjobs.

Prerequisites for running a batch job

A number of prerequisites must be taken care of before a batch job can be executed. These are:

  • Settings file: The settings file must be present and must include declarations of at least the following settings:
  • Replica settings to identify the replica you want to communicate with:
  • settings.common.replicas in order for the batch program to identify and send messages to the bitarchive.
  • settings.common.useReplicaId in order to determine the default bitarchive replica to use.
  • Channel settings to be able to construct channel names for communicating with the running system:
  • settings.common.environmentName (typically PROD)
  • settings.common.applicationName (RunBatchApplication, but currently set automatically)
  • Other settings related to communication where the running system's settings differ from the defaults.
  • Batch program: The batch program must be designed as a Java class that extends ARCBatchJob or FileBatchJob, depending on whether you want to make a batch program over arc records or over files.
  • Call location: The RunBatch program can be started from any of the machines in the distributed system where the system runs.
  • Disk space requirement on bitarchive: The disk space needed depends on the batch program concerned. As an example, the ChecksumJob produces about 100 bytes per arc-file, whereas a batch program writing the full contents of arc-files would require as much space as the archive itself.
  • Class path: Running RunBatch requires lib/dk.netarkivet.archive.jar in the class path.
  • Memory space on bitarchive: The memory needed depends on the batch program. If the batch program uses a lot of jar files, these need to be kept in memory while the batch program is running, and on top of that come the memory requirements for the batch job itself.
  • Timeout on bitarchive monitor: To set a specific timeout for a concrete batchjob, you need to override 'protected long batchJobTimeout = -1;' in your batchjob class. Otherwise the default timeout is 14 days.
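The disk-space estimate for ChecksumJob output can be turned into a quick back-of-the-envelope calculation (the file count below is a made-up example; the ~100 bytes per arc-file figure comes from the list above):

```shell
# Assumption from the text: ChecksumJob writes roughly 100 bytes per arc-file.
files=2000000                     # hypothetical number of arc-files in the replica
bytes=$((files * 100))
echo "approx output: $((bytes / 1024 / 1024)) MB"
```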

Execution and Arguments

The execution of a batch program is done by calling the RunBatch program with the following arguments:

If the batch program is given in a single class file, this must be specified in the parameter:

  • -C<classfile> is a file containing a FileBatchJob/ARCBatchJob implementation.
    If the batch program is given in one or more jar files, this must be specified in the parameters:
  • -N<className> is the name of the primary class to be loaded and executed as a FileBatchJob/ARCBatchJob implementation.
  • -J<jarfile> is one or more jar files containing all the classes needed by the primary class. The files must be comma separated.
    To specify which files the batch program must be executed on, the following parameters may optionally be set:
  • -B<replica> is the name of the bitarchive replica on which the batchjob must be executed. The default is the name of the bitarchive replica identified by the setting settings.common.useReplicaId. Note that it is the replica name and not the replica id that is referred to here. Also, it cannot be the name of a checksum replica, since batchjobs can only be executed on bitarchive replicas.
  • -R<regexp> is a regular expression that will be matched against file names in the archive. The default is .*, which means the job will be executed on all files in the bitarchive replica.
    To specify output files from the batch program, the following parameters may optionally be set:
  • -O<outputfile> is a file where the output from the batch job will be written. By default, output goes to stdout, where it will be mixed with other output.
  • -E<errorFile> is a file where the errors from the batch job will be written. By default, errors go to stderr.
    An example of an execution command is:
   java -Ddk.netarkivet.settings.file=/home/user/conf/settings_ArcRepositoryApplication.xml \
        -cp lib/dk.netarkivet.archive.jar \ \
        -CFindMime.class -R10-*.arc -BReplicaOne -Oresfile

which will take lib/dk.netarkivet.archive.jar into the class path and execute the general NetarchiveSuite RunBatch program based on settings from the file /home/user/conf/settings_ArcRepositoryApplication.xml. This will run the batch program FindMime.class on the bitarchive replica named ReplicaOne, but only on files with names matching the pattern 10-*.arc. The results written by the batch program are concatenated and placed in the output file named resfile.
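Note that -R takes a regular expression, not a shell glob, so "files starting with 10-" should be written in regexp form. A quick way to sanity-check a pattern against sample file names before submitting a job (the names below are made up):

```shell
# Regexp form of "names starting with 10- and ending in .arc".
pattern='10-.*\.arc'
for f in 10-1.arc 10-2-metadata-1.arc 11-1.arc; do
  if echo "$f" | grep -Eq "^${pattern}$"; then
    echo "match: $f"
  fi
done
```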

Example of packing and executing a batch job

To package the files do the following:

jar -cvf batchfile.jar path/batchProgram.class

where path is the path to the directory where the batch class files are placed. This is under the bin/ directory in the Eclipse project. The batchProgram.class is the compiled file for your batch program.

The call to run this batch job is then:

  java -Ddk.netarkivet.settings.file=conf/settings_ArcRepositoryApplication.xml \
        -cp lib/dk.netarkivet.archive.jar \ \
       -Jbatch.jar -Npath.batchProgram

where path in the -N argument has all '/' changed to '.'.
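The '/'-to-'.' conversion from a class-file path to the -N argument can be sketched mechanically (using the example names from this section):

```shell
# Convert a compiled class-file path to the fully qualified class name
# expected by -N: drop the .class suffix and turn '/' into '.'.
path="myBatchJobs/arc/MyArcBatchJob.class"
classname=$(echo "${path%.class}" | tr '/' '.')
echo "$classname"
```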

E.g. to run the batch job from the file myBatchJobs/arc/MyArcBatchJob.class, which inherits from the ARCBatchJob class (dk/netarkivet/common/utils/arc/ARCBatchJob), do the following.

  • cd bin/ - Place yourself in the bin/ folder under your project.
  • jar cvf batch.jar myBatchJobs/arc/* - Package the compiled Java binaries into a .jar file.
  • mv batch.jar ~/NetarchiveSuite/. - Move the packaged batch job to your NetarchiveSuite directory.
  • cd ~/NetarchiveSuite/ - Go to your NetarchiveSuite directory.
  • Run the following command to execute the batch job:
       java -Ddk.netarkivet.settings.file=conf/settings_ArcRepositoryApplication.xml \
            -cp lib/dk.netarkivet.archive.jar:lib/dk.netarkivet.common.jar \
            -Jbatch.jar -NmyBatchJobs.arc.MyArcBatchJob
    The lib/dk.netarkivet.common.jar library needs to be included in the classpath since the batch job (myBatchJobs/arc/MyArcBatchJob) inherits from a class within this library (dk/netarkivet/common/utils/arc/ARCBatchJob).


If the security properties for the bitarchive (independent of this execution) are set as described in the Configuration Manual, the batch program will not be allowed to:

  • write files to the bitarchive
  • change files in the bitarchive
  • delete files in the bitarchive
