logback.xml disabling logs to stdout:
By default, our tools write log entries to stdout. To disable this, create a logback.xml with the following content
and refer to this file with
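A minimal sketch of both steps, assuming a Unix shell and that the tool is started with a plain java command (the conf/ path is an example):

```shell
# Create a logback.xml that turns all logging off (conf/ is an assumed location).
mkdir -p conf
cat > conf/logback.xml <<'EOF'
<configuration>
  <!-- No appenders and root level OFF: nothing is written to stdout. -->
  <root level="OFF"/>
</configuration>
EOF

# Refer to the file with logback's standard system property when starting a tool:
# java -Dlogback.configurationFile=conf/logback.xml <tool-class> ...
```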
This is a tool which is only run when converting from a file-based administration of ArcRepository to the database-based administration of ArcRepository. It takes the admin.data file and enters its data into the database.
You need to have the external database running, the admin.data file must exist, and the tool must be run from the installation directory.
The optional argument is the path to the admin.data file. By default it is assumed to be called 'admin.data' and to be located in the directory where the tool is run. The argument is therefore only necessary if the admin.data file is in another directory or has another name (e.g. backups/admin.data or admin.data.backup).
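As a sketch, an invocation with the optional argument could look like this. The tool's main class is written as a placeholder since it is not named in this section, and the dk.netarkivet.settings.file property and jar name are assumptions based on the other tools in this manual:

```shell
java -cp lib/netarchivesuite-archive-core.jar \
     -Ddk.netarkivet.settings.file=conf/settings.xml \
     <tool-class> backups/admin.data
```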
This tool forces the IndexServer to create indices preemptively. It can be used for retrieving logs and cdx'es for previously completed harvestjobs before they are actually needed. This can be helpful if you want to reduce the time it takes to generate deduplication indices.
You need to have an IndexServerApplication online. If you use HTTP as the file transport method, you probably also need to override the setting settings.common.remoteFile.port in order to avoid conflicts (in the example below, we have set the port number to 5000).
Furthermore all harvestjobs referred to in the CreateIndex commands must have metadata-1.arc files stored in the archive.
This requests a deduplication index based on the harvestjobs with id 1 and 2, and stores this index in /tmp/cache/DEDUP_CRAWL_LOG/1-2-cache
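A sketch of such an invocation. The full class name is assumed by analogy with the Upload tool below, and the index-type/job arguments are shown as a placeholder since their exact flags are not given in this section; the port override matches the example value above:

```shell
java -cp lib/netarchivesuite-archive-core.jar \
     -Ddk.netarkivet.settings.file=conf/settings.xml \
     -Dsettings.common.remoteFile.port=5000 \
     dk.netarkivet.archive.tools.CreateIndex <index-type and job ids, e.g. a DEDUP_CRAWL_LOG index for jobs 1 and 2>
```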
With this tool you can retrieve a file from your archive.
If you want to use an arcrepository client other than the default (dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient), you need to override the setting
If you use the default, you need to set the environmentName correctly so your ArcrepositoryApplication receives your GetFile request, define your replicas, and set the replicaId of the replica from which you want to get the data. All this is most easily put into a local settings.xml:
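A sketch of such a settings.xml, written here as a shell heredoc. The element layout is an assumption, following the usual convention that a setting path like settings.common.environmentName maps to nested XML elements; check it against the settings.xml shipped with your installation:

```shell
# Write an example settings.xml (conf/ is an assumed location).
mkdir -p conf
cat > conf/settings.xml <<'EOF'
<settings>
  <common>
    <environmentName>QUICKSTART</environmentName>
    <useReplicaId>SH</useReplicaId>
    <replicas>
      <replica>
        <replicaId>SH</replicaId>
        <replicaName>ReplicaOne</replicaName>
        <replicaType>bitarchive</replicaType>
      </replica>
    </replicas>
  </common>
</settings>
EOF
```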
In the settings.xml above, the environment name has been set to QUICKSTART, there is only a single replica with replicaId=SH, and the id of the replica from which you want to get the data is "SH".
If the file 3-metadata-1.arc exists in your SH replica, the file is downloaded from the archive and written to the current working directory. If not, you will wait for a long time until the arcrepository client times out. The tool has an optional second argument, which is a destination file:
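As an illustrative sketch (the main class name dk.netarkivet.archive.tools.GetFile is assumed by analogy with the Upload tool below, and the destination path is an example):

```shell
java -cp lib/netarchivesuite-archive-core.jar \
     -Ddk.netarkivet.settings.file=conf/settings.xml \
     dk.netarkivet.archive.tools.GetFile 3-metadata-1.arc /tmp/3-metadata-1.arc
```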
The tool "dk.netarkivet.archive.tools.Upload" allows one to upload ARC files to a repository of your choice.
The type of arcrepository you are uploading your files to is defined by the setting
, where the default is dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient. This client uses JMS messages to communicate with a repository.
If you use the client dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient, you need to ensure that you send upload requests to the correct JMS queue, and that the client receives the responses. This is ensured by setting the setting
to the proper value (e.g. PROD or DEV). The same holds for the setting
(e.g. Upload), and finally "settings.common.applicationInstanceId" (e.g. ONE or TWO). If you intend to override any of the settings mentioned above, you can either do the overrides on the command line or write them to a settings file.
Using the tool
This tool will upload a number of local files to all replicas in the archive. An example of an execution command is:
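A sketch of such a command, assuming it is run from the installation directory; the settings-file path and the number of files are examples:

```shell
java -cp lib/netarchivesuite-archive-core.jar \
     -Ddk.netarkivet.settings.file=conf/settings.xml \
     dk.netarkivet.archive.tools.Upload file1.arc file2.arc
```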
where file1.arc is a file to be uploaded (several files can be listed)
This will cause the files to be uploaded. Furthermore, the behaviour of the default client (JMSArcRepositoryClient) is that a file which is uploaded successfully is deleted locally. This means that any files left after Upload has finished are probably not stored safely.
This tool takes a CDX-based Lucene index and a URI, retrieves the corresponding ARC record from the archive, and dumps it to stdout.
The same as for GetFile.
If the URI is not in the given index, an exception is written to stdout with the message: Resource missing in index or repository for URI
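As a sketch (the tool's main class is written as a placeholder since it is not named in this section; the index directory and URI are example values):

```shell
java -cp lib/netarchivesuite-archive-core.jar \
     -Ddk.netarkivet.settings.file=conf/settings.xml \
     <tool-class> <path-to-lucene-index> http://www.example.com/index.html
```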
TODO: Mention how to make a Lucene index for your stored arcfiles.
The bitarchives are designed to receive batch programs to run on all the arc-files stored in the bitarchive. This is true whether the bitarchive is installed as a local arc-repository or as a distributed repository with several bitarchives. Batch programs are also used internally by the NetarchiveSuite software for specific tasks like getting CDX'es for a specific job, checksums of arc-files stored in the bitarchive, or lists of arc-files from the bitarchive.
The RunBatch program is used to send your own batchjobs to the bitarchives.
Note that a batchjob will only be sent to one bitarchive replica!
It is not possible to send batchjobs to checksum replicas, as only bitarchive replicas can run them.
Prerequisites for running a batch job
A number of prerequisites must be taken care of before a batch job can be executed. These are:
- A Settings file must be present and must include declarations of at least the following settings:
- Replicas to identify the replica you want to communicate with:
- settings.common.replicas in order for the batch program to identify and send messages to the bitarchive.
- settings.common.useReplicaId in order to determine default bitarchive replica to use.
Channel settings to be able to construct the channel names used to communicate with the running system:
- settings.common.environmentName (typically PROD)
- settings.common.applicationName (RunBatchApplication, but currently set automatically)
Other settings related to communication, where the running system's settings differ from the defaults.
- Batch program: The batch program must be a Java class that extends ARCBatchJob or FileBatchJob, depending on whether you want to make a batch program over arc records or over files.
- Call location: The RunBatch program can be started from any of the machines in the distributed system where the system runs.
- Disk space requirement on bitarchive: The disk space needed depends on the batch program concerned. As an example, the ChecksumJob produces about 100 bytes per arc-file, whereas a batch program writing the full contents of arc-files would require as much space as the archive itself.
- Class Path: Running RunBatch requires lib/netarchivesuite-archive-core.jar in the class path
- Memory space on bitarchive: The memory space needed depends on the batch program. If the batch program uses a lot of jar files, these will need to be kept in memory while the batch program is running, and on top of that come the memory requirements of the batch job itself.
- Timeout on bitarchive monitor: To set a specific timeout for a concrete batchjob, you need to override the variable 'protected long batchJobTimeout = -1;' in FileBatchJob.java. Otherwise the default timeout is 14 days.
Execution and Arguments
The execution of a batch program is done by calling the dk.netarkivet.archive.tools.RunBatch program with the following arguments:
If the batch program is given in a single class file, this must be specified in the parameter:
- -C<classfile> is a file containing a FileBatchJob/ARCBatchJob implementation
If the batch program is given in one or more jar files, this must be specified in the parameters:
- -N<className> is the name of the primary class to be loaded and executed as a FileBatchJob/ARCBatchJob implementation
- -J<jarfile> is one or more jar files containing all the classes needed by the primary class. The files must be separated by commas.
To specify which files the batch program must be executed on, the following parameters may be set optionally:
- -B<replica> is the name of the bitarchive replica on which the batchjob must be executed. The default is the name of the bitarchive replica identified by the setting settings.common.useReplicaId. Note that it is the replica name and not the replica id which is referred to here. Also, it cannot be the name of a checksum replica, since batchjobs can only be executed on bitarchive replicas.
- -R<regexp> is a regular expression that will be matched against file names in the archive. The default is *.* which means it will be executed on all files in the bitarchive replica.
To specify output files from the batch program, the following parameters may be set optionally
- -O<outputfile> is a file where the output from the batch job will be written. By default, it goes to stdout, but it will be mixed with other output to stdout.
- -E<errorFile> is a file where the errors from the batch job will be written. By default, it goes to stderr.
An example of an execution command is:
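A sketch of such a command, assembled from the options documented above; the file-name pattern is shown as an example value, since the actual pattern is not given here:

```shell
java -cp lib/netarchivesuite-archive-core.jar \
     -Ddk.netarkivet.settings.file=/home/user/conf/settings_ArcRepositoryApplication.xml \
     dk.netarkivet.archive.tools.RunBatch \
     -CFindMime.class -BReplicaOne -R'<regexp, e.g. .*\.arc>' -Oresfile
```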
which includes lib/netarchivesuite-archive-core.jar in the class path and executes the general NetarchiveSuite program dk.netarkivet.archive.tools.RunBatch based on settings from the file /home/user/conf/settings_ArcRepositoryApplication.xml. This will run the batch program FindMime.class on the bitarchive replica named ReplicaOne, but only on files with names matching the pattern
The results written by the batch program are concatenated and placed in the output file named resfile.
Example of packing and executing a batch job
To package the files do the following:
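A sketch of the packaging command, run from the directory containing the compiled classes so that the package directories are preserved inside the jar; batchfile.jar is an example name, and PATH and batchProgram.class are the placeholders explained below:

```shell
jar -cvf batchfile.jar PATH/batchProgram.class
```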
where PATH is the path to the directory where the batch class files are placed. This is under the bin/ directory in the eclipse project. The batchProgram.class is the compiled file for your batch program.
The call to run this batch job is then:
where the path in the -N argument has all '/' changed to '.'.
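The path-to-class-name conversion can be illustrated in the shell:

```shell
# Convert the class file path (without the .class suffix) into the -N argument.
echo "myBatchJobs/arc/MyArcBatchJob" | tr '/' '.'
# → myBatchJobs.arc.MyArcBatchJob
```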
E.g. to run the batch job from the file myBatchJobs/arc/MyArcBatchJob.java, which inherits the ARCBatchJob class (dk/netarkivet/common/utils/arc/ARCBatchJob), do the following.
1. Place yourself in the bin/ folder under your project:
2. Package the compiled Java binaries into a .jar file:
3. Move the packaged batch job to your NetarchiveSuite directory.
4. Run the following command to execute the batch job:
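A sketch of the four steps. The project directory, jar name, and settings-file path are example values, and the RunBatch options are those documented above:

```shell
# 1. Place yourself in the bin/ folder under your project (path is an example):
cd ~/workspace/myProject/bin

# 2. Package the compiled Java binaries into a .jar file (name is an example):
jar -cvf myArcBatchJob.jar myBatchJobs/arc/MyArcBatchJob.class

# 3. Move the packaged batch job to your NetarchiveSuite directory (path is an example):
mv myArcBatchJob.jar ~/netarchivesuite/

# 4. Execute the batch job via RunBatch:
cd ~/netarchivesuite
java -cp lib/netarchivesuite-archive-core.jar:lib/netarchivesuite-common-core.jar \
     -Ddk.netarkivet.settings.file=conf/settings.xml \
     dk.netarkivet.archive.tools.RunBatch \
     -JmyArcBatchJob.jar -NmyBatchJobs.arc.MyArcBatchJob
```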
The lib/netarchivesuite-common-core.jar library needs to be included in the classpath, since the batch job (myBatchJobs/arc/MyArcBatchJob) inherits from a class within this library (dk/netarkivet/common/utils/arc/ARCBatchJob).
If the security properties for the bitarchive (independent of this execution) are set as described in the Configuration Manual, the batch program will not be allowed:
- to write files to the bitarchive
- to change files in the bitarchive
- to delete files in the bitarchive