Configure Channel Names
Channels are used for communication between applications. A set of channel names is defined based on the following settings:
- settings.common.environmentName: This setting is used as a prefix to all channel names created in a NetarchiveSuite installation, and must be the same for all applications in the same installation. Note that this means that several installations can be installed on the same machines as long as their environment names differ (e.g. for test installations). The value of the environmentName setting must not contain the character '_'.
- settings.common.applicationInstanceId: This setting is used to distinguish channels when more than one instance of the same application runs on the same machine, e.g. when several harvesters or several bitarchive applications run on the same machine. Note that tools like RunBatch and Upload also need a distinct application instance id in order to avoid channel name clashes with other applications when communicating with the bitarchives.
- useReplicaId: This setting is used to choose the channels for a specific bitarchive in a distributed archive installation. The replica id specified must match one of the bitarchive replicas in the settings.common.replicas settings. Note that if there is only one bitarchive (or a simple repository installation on a local disc), the default values will be sufficient.
Note that some channel names also include the IP address of the machine where the application is running. This is not part of the settings, but ensures that applications on different machines do not share channels when they are not meant to.
For further information, see JMS Channels.
Parts of the NetarchiveSuite code allow plugging in your own Java implementation, or selecting between different implementations provided by NetarchiveSuite.
When this is done, it has two implications for settings:
- You need to set the implementing class with a setting (these settings always end in .class).
- The plug-in may specify extra settings of its own.
For a list of the different plug-ins in the NetarchiveSuite package, please refer to Appendix A - Plug-ins in NetarchiveSuite.
For more details on how to extend the system with pluggable classes with their own settings, please see the System Design on plugins.
NetarchiveSuite can send notifications of serious system warnings or failures to the system owner by email. This is implemented using the Notifications plug-in (see also Appendix A - Plug-ins in NetarchiveSuite). Several settings in settings.xml must be configured for this to work:
- settings.common.notifications.receiver (the recipient of notifications),
- settings.common.notifications.sender (the official sender of the email, and receiver of any bounces), and
- settings.common.mail.server (the proper mail server to use).
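A minimal sketch of the corresponding settings.xml fragment is shown below; the class name assumes an email-based Notifications implementation, and the host and address values are placeholders you must replace:

```xml
<settings>
  <common>
    <notifications>
      <!-- Assumed email-based Notifications implementation -->
      <class>dk.netarkivet.common.utils.EMailNotifications</class>
      <receiver>sysadmin@example.org</receiver>
      <sender>noreply@example.org</sender>
    </notifications>
    <mail>
      <!-- Replace with your mail server -->
      <server>mail.example.org</server>
    </mail>
  </common>
</settings>
```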
Alternatively, the class dk.netarkivet.common.utils.PrintNotifications can be used. This will simply print the notifications to stderr on the terminal.
Configure a File Data Transfer Method
The data transfer method can be configured as a plug-in (see also Appendix A - Plug-ins in NetarchiveSuite).
You can currently choose between FTP, HTTP, or HTTPS as the file transfer method. The HTTP transfer method uses only a single copy per transfer, while the FTP method first copies the file to an FTP server and then copies it from there to the receiving side. Additionally, the HTTP transfer method reverts to simple filesystem copying whenever possible to optimize transfer speeds. However, to use HTTP transfers you must have ports open into most machines, which some may consider a security risk. The HTTPS transfer method addresses this problem by securing and encrypting the HTTP communication. To use the HTTPS transfer method you will need to generate a certificate that is used to contact the embedded HTTPS server.
The FTP method requires one or more FTP servers to be installed (see Installing and configuring FTP for further details). The XML below is an example settings.xml fragment, in which you have to replace serverName, userName and userPassword with proper values. This must be set for all applications using FTP remote files.
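A sketch of such a fragment, assuming the FTPRemoteFile plug-in class; the server name, user name and password are placeholders:

```xml
<settings>
  <common>
    <remoteFile>
      <class>dk.netarkivet.common.distribute.FTPRemoteFile</class>
      <serverName>ftp.example.org</serverName>   <!-- replace with your FTP server -->
      <serverPort>21</serverPort>
      <userName>ftpUser</userName>               <!-- replace -->
      <userPassword>ftpPassword</userPassword>   <!-- replace -->
    </remoteFile>
  </common>
</settings>
```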
It is possible to use more than one FTP server, but each application can use only one. The FTP server used for a particular transfer is determined by the application sending the file. If you want to use more than one FTP server, you must use different settings for serverName (e.g. FTP-server1), and possibly also for userName (e.g. ftpUser) and userPassword (e.g. ftpPassword), when starting the applications.
When using HTTP as the file transfer method, you need to reserve an HTTP port on each machine for each application. You can do this by setting settings.common.remoteFile.port to a free port number. The following XML shows the corresponding syntax in the settings.xml file:
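An illustrative fragment, assuming the HTTPRemoteFile plug-in class and an arbitrarily chosen free port:

```xml
<settings>
  <common>
    <remoteFile>
      <class>dk.netarkivet.common.distribute.HTTPRemoteFile</class>
      <port>5442</port>  <!-- one free port per application per machine -->
    </remoteFile>
  </common>
</settings>
```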
When using the HTTPS file transfer method, you first need to generate a certificate that is used for the communication. You can do this with the keytool application distributed with Sun Java 5 and above. Run the following command:
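A sketch of such a keytool invocation (the alias name is illustrative; the keystore file name matches the keystore file referenced below):

```
keytool -genkey -alias NetarchiveSuite -keystore keystore
```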
It should then respond with the following:
Enter the password for the keystore.
The keytool will now prompt you for the following information:
Answer all the questions, and end with "yes".
Finally you will be asked for the certificate password.
Answer with a password for the certificate.
You now have a file called keystore, which contains a certificate. This keystore needs to be available to all NetarchiveSuite applications, and referenced from the settings as the following example shows:
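An illustrative fragment, assuming the HTTPSRemoteFile plug-in class; the key-store setting names below are our assumption, and the path, port and passwords are placeholders:

```xml
<settings>
  <common>
    <remoteFile>
      <class>dk.netarkivet.common.distribute.HTTPSRemoteFile</class>
      <port>5443</port>
      <!-- Setting names assumed; adjust to your NetarchiveSuite version -->
      <certificateKeyStore>conf/keystore</certificateKeyStore>
      <certificateKeyStorePassword>keystorePassword</certificateKeyStorePassword>
      <certificatePassword>certificatePassword</certificatePassword>
    </remoteFile>
  </common>
</settings>
```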
To keep your environment secure, you should make sure that the keystore and the settings file are readable ''only'' by the user running the application.
Configure a JMS broker
The connection to the JMS broker is likewise configured as a plug-in (see also Appendix A - Plug-ins in NetarchiveSuite).
In the configuration below, the JMS broker resides at localhost and listens for messages on port 7676.
You must also select a JMS environment name corresponding to the environmentName NetarchiveSuite setting (see the Common part). This allows you to have more than one running installation of NetarchiveSuite, each with its own environmentName. This also makes it easy to clean up the JMS queues associated with a given environmentName.
NetarchiveSuite currently supports only one kind of JMS broker, so only 'broker', 'port', and 'environmentName' can be changed.
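An illustrative fragment; the connection class shown assumes the Sun/Open Message Queue implementation bundled with NetarchiveSuite:

```xml
<settings>
  <common>
    <environmentName>PROD</environmentName>  <!-- also used as the JMS environment name -->
    <jms>
      <class>dk.netarkivet.common.distribute.JMSConnectionSunMQ</class>
      <broker>localhost</broker>
      <port>7676</port>
    </jms>
  </common>
</settings>
```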
Configure Repository
The repository can be configured either as a simple local repository or as a complex distributed repository with distributed bitarchive replicas.
A simple repository can be configured as a plug-in by using dk.netarkivet.common.distribute.arcrepository.LocalArcRepositoryClient as the value of the settings.common.arcrepositoryClient.class setting (see also Appendix A - Plug-ins in NetarchiveSuite).
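A minimal sketch of that setting in settings.xml:

```xml
<settings>
  <common>
    <arcrepositoryClient>
      <class>dk.netarkivet.common.distribute.arcrepository.LocalArcRepositoryClient</class>
    </arcrepositoryClient>
  </common>
</settings>
```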
For a more complex distributed repository with at least two replicas, the settings for the replicas must be defined. In this example we look at two bitarchive replicas, here called ReplicaOne and ReplicaTwo.
The following is an example of settings for a repository with two bitarchive replicas:
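A sketch of such a replica definition; the element names and replica ids below are illustrative of the settings.common.replicas structure:

```xml
<settings>
  <common>
    <replicas>
      <replica>
        <replicaId>ONE</replicaId>
        <replicaName>ReplicaOne</replicaName>
        <replicaType>bitarchive</replicaType>
      </replica>
      <replica>
        <replicaId>TWO</replicaId>
        <replicaName>ReplicaTwo</replicaName>
        <replicaType>bitarchive</replicaType>
      </replica>
    </replicas>
  </common>
</settings>
```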
For applications that need to communicate with one of the replicas, the useReplicaId must be set. The useReplicaId points to the replica that is used by default, e.g. for the execution of batch jobs – typically the replica with the greater amount of processing power and/or the smaller amount of storage space per bitarchive application.
Furthermore, the common replica definition must conform to the settings for the corresponding bitarchive applications and bitarchive monitors, i.e. their useReplicaId must correspond to the replica that they represent.
Configure Scheduling
Scheduling takes place every minute, unless the previous scheduling run has not finished yet. The scheduling interval cannot be changed. Scheduling amounts to searching for active harvest definitions that are ready to have jobs generated and subsequently submitted for harvesting. The job-generation procedure is governed by a set of settings prefixed by ''settings.harvester.scheduler.''. These settings determine how large your crawl jobs are going to be, and how long they will take to complete. Note that harvest definitions consist of at least one DomainConfiguration (containing a Heritrix setup and a seed list), and that there are two kinds: snapshot harvest definitions and selective harvest definitions.
During scheduling, each harvest is split into a number of crawl jobs. This is done to keep Heritrix from using too much memory and to prevent particularly slow or large domains from causing harvests to take longer than necessary. In the job-splitting part of the scheduling, the scheduler partitions a large number of DomainConfigurations into several crawl jobs. Each crawl job can have only one Heritrix setup, so DomainConfigurations with different Heritrix setups will be split into different crawl jobs. Additionally, a number of parameters influence which configurations are put into which jobs, attempting to create jobs that cover a reasonable number of domains of similar sizes.
If you don't want the harvests split into multiple jobs, you just need to set each of the job-splitting settings listed below to a large number, such as MAX_LONG (see the sketch after the list). Initially, we suggest you don't change these parameters, as the way they work together is subtle. Harvests will always be split into different jobs, though, if they are based on different order.xml templates, or if different harvest limits need to be enforced.
settings.harvester.scheduler.errorFactorPrevResult: Used when calculating the expected size of a harvest of a domain during the job-creation process for snapshot harvests. This defines the factor by which we maximally allow domains that have previously been harvested to increase in size, compared to our estimate of the domain's size. In other words, it defines how conservative our estimates are. The default value is 10, meaning that the maximum number of bytes harvested is at most 10 times as great as the value we use as the expected size.
settings.harvester.scheduler.errorFactorBestGuess: Used when calculating the expected size of a harvest of a domain during the job-creation process for snapshot harvests. This defines the factor by which we maximally allow domains that have previously been incompletely harvested, or not harvested at all, to increase in size, compared to our estimate of the domain's size. In other words, it defines how conservative our estimates are. The default value is 20, meaning that the maximum number of bytes harvested is at most 20 times as great as the value we use as the expected size. This is probably an unreasonable number; it should be reset to 2 for most installations.
settings.harvester.scheduler.expectedAverageBytesPerObject: How many bytes the average object is expected to be on domains where we don't know any better. This number should grow over time; as of the end of 2005, empirical data showed 38000. The default is 38000.
settings.harvester.scheduler.maxDomainSize: Initial guess of the number of objects in an unknown domain. The default value is 5000.
settings.harvester.scheduler.jobs.maxRelativeSizeDifference: The maximum allowed relative difference in expected number of objects retrieved in a single job definition. Set to MAX_LONG for no splitting.
settings.harvester.scheduler.jobs.minAbsoluteSizeDifference: Size differences for jobs below this threshold are ignored, regardless of the limits for the relative size difference. Set to MAX_LONG for no splitting. Default value is 2000.
settings.harvester.scheduler.jobs.maxTotalSize: When this limit is exceeded, no more configurations may be added to a job. Set to MAX_LONG for no splitting. The default value is 2000000.
settings.harvester.scheduler.configChunkSize: How many domain configurations we will process in one go before making jobs out of them. This amount of domains will be stored in memory at the same time. Set to MAX_LONG for no job splitting. The default value is 10000.
MAX_LONG refers to the number 2^63-1 or 9223372036854775807.
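As a sketch, disabling job splitting would then look like this in settings.xml (9223372036854775807 being MAX_LONG):

```xml
<settings>
  <harvester>
    <scheduler>
      <jobs>
        <!-- MAX_LONG in each setting effectively disables job splitting -->
        <maxRelativeSizeDifference>9223372036854775807</maxRelativeSizeDifference>
        <minAbsoluteSizeDifference>9223372036854775807</minAbsoluteSizeDifference>
        <maxTotalSize>9223372036854775807</maxTotalSize>
      </jobs>
    </scheduler>
  </harvester>
</settings>
```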
Configure Domain Granularity
The NetarchiveSuite software is bound to the concept of Domains, where a Domain is defined as the part of a host name one level below a top-level domain (or below a pseudo top-level domain such as .co.uk). This concept is useful for grouping harvests with regard to specific domains.
What is considered a TLD can be configured by changing the settings files. The settings file currently distributed with the NetarchiveSuite software lists all country-level top-level domains as "tld"s, like ".dk", ".se" and ".no". However, as a proof of concept, for ".uk" domains the pseudo top-level domains ".co.uk", ".gov.uk", ".edu.uk" and some more are also listed.
Currently, only grouping by domain suffix is supported, see NAS-1637 Plugin of Domain definition suggested.
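A hypothetical sketch of such a TLD list in the settings file (the element names here are illustrative, not confirmed):

```xml
<settings>
  <common>
    <!-- Element names are an assumption; check the distributed settings file -->
    <topLevelDomains>
      <tld>dk</tld>
      <tld>se</tld>
      <tld>no</tld>
      <tld>co.uk</tld>
      <tld>gov.uk</tld>
      <tld>edu.uk</tld>
    </topLevelDomains>
  </common>
</settings>
```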
Configure Heritrix process
This section describes the configuration for running Heritrix processes via NetarchiveSuite. For details on managing Heritrix harvest templates (order.xml), please refer to Appendix B - Managing Heritrix Harvest Templates (order.xml).
The communication between NetarchiveSuite and Heritrix is handled by the settings.harvester.harvesting.heritrixController.class plugin (see also Appendix A - Plug-ins in NetarchiveSuite). However, only one supported implementation is bundled with NetarchiveSuite, the JMXHeritrixController.
Each harvester runs an instance of Heritrix for each harvest job being executed. It is possible to access the Heritrix web user interface for the purposes of pausing or stopping a job, examining details of an ongoing harvest or even, if necessary, changing an ongoing harvest. Note that some changes to harvests, especially those that change the scope and limits, may confuse the harvest definition system. We suggest using the Heritrix UI only for examination and for pausing/terminating jobs.
Each running harvest ''application'' requires two ports:
- one for the user interface. The user interface port is set by the settings.harvester.harvesting.heritrix.guiPort setting, and should be open to the machines from which the user interface should be accessible. Make sure to use different ports for each harvest application if you are running more than one on a machine. Otherwise, your harvest jobs will fail when two harvest applications happen to run at the same time – an error that could go unnoticed for a while, but which is more likely to happen exactly in critical situations where more harvesters are needed.
- one for JMX (Java Management Extensions, which communicates with Heritrix). The JMX port is set by the settings.harvester.harvesting.heritrix.jmxPort setting, and does not need to be open to other machines.
The Heritrix user interface is accessible through a browser using the port specified, e.g. http://my.harvester.machine:8090, and entering the administrator name and password set in the settings.harvester.harvesting.heritrix.adminName and settings.harvester.harvesting.heritrix.adminPassword settings.
In order for the harvester application to communicate with Heritrix, there needs to be a username and password for the JMX controlRole that is used for this communication. This username and password must be set in the settings settings.harvester.harvesting.heritrix.jmxUsername and settings.harvester.harvesting.heritrix.jmxPassword. They also need to be inserted as the corresponding values in the conf/jmxremote.password file (template [examples/jmxremote_template.password|https://sbforge.org/svn/netarchivesuite/trunk/examples/jmxremote_template.password]), where you will find them on the controlRole line.
An example of the above-mentioned settings is given here:
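A sketch of these settings; the port numbers and credentials are placeholders:

```xml
<settings>
  <harvester>
    <harvesting>
      <heritrix>
        <guiPort>8090</guiPort>
        <jmxPort>8091</jmxPort>
        <adminName>admin</adminName>
        <adminPassword>adminPassword</adminPassword>
        <jmxUsername>controlRole</jmxUsername>
        <jmxPassword>JMX_CONTROL_PASSWORD</jmxPassword>
      </heritrix>
    </harvesting>
  </harvester>
</settings>
```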
It is also possible to use JConsole to access the JMX interface of the Heritrix process.
The final setting for the Heritrix processes is the amount of heap space each process is allowed to use. Since Heritrix uses a significant amount of heap space for seen URLs and other data, it is advisable to keep the settings.harvester.harvesting.heritrix.heapSize setting at least at its default of 1.5 GB if there is enough memory in the machine for this (remember to factor in the number of harvesters running on the machine – swapping will slow the crawl down ''significantly'').
Configure web page look
The look of the web pages can be changed by editing files in the webpages directory. The files are distributed in war files, which are simply zip files; they can be unpacked to customize styles, and repacked afterwards using zip. Each of the five war files under webpages corresponds to one section of the web site, as seen in the left-hand menu. The two PNG files transparent_logo.png and transparent_menu_logo.png are used on the front page and atop the left-hand menu, respectively. They can be altered to suit your needs, but the width of transparent_menu_logo.png should not be increased so much that the menu becomes overly wide. The color scheme for each section is set in that section's netarkivet.css file and can also be changed, though we recommend changing them all at the same time to provide a uniform look.
Configure Security
Security in NetarchiveSuite is mainly defined in the examples/security_template.policy file. This file controls two main configurations: which classes are allowed to do anything (core classes), and which classes are only allowed to read the files in the bitarchive (third-party batch classes). It is recommended that you fit this template to your own requirements and store it in a CVS/SVN repository locally, as we do at Netarkivet.
To enable the use of the security policy, you will need to launch your applications with the command-line option -Djava.security.policy=examples/security_template.policy. (Note: In NetarchiveSuite 3.8.*, the bundled security template was placed in conf/ and named security.policy.)
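A sketch of such a launch command; the settings-file property and the application class are examples, and -Djava.security.manager is needed for the policy to actually be enforced:

```
java -Djava.security.manager \
     -Djava.security.policy=examples/security_template.policy \
     -Ddk.netarkivet.settings.file=conf/settings.xml \
     dk.netarkivet.common.webinterface.GUIApplication
```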
For the core classes, we need to identify all the classes that can be involved. The default security.policy file assumes that the program is started from the root of the distribution. If that is not the case, the codeBase entries must be changed to match. The following classes should be included:
- The dk.netarkivet.* jar files and supporting jar files, located in the lib directory. By default, all files in this directory and its subdirectories are included by a single codeBase statement covering the lib directory.
- The Heritrix jar files and supporting jar files for it, usually located in the lib/heritrix/lib directory. By default, these are included by the above.
- The standard Java classes, which by default are included by their own statement.
- The classes compiled from JSP as part of the web interface. These classes only exist on the machine(s) that run a web interface, and are found in the directory specified by the settings.common.tempDir setting. The default security file contains entries that assume this directory is tests/commontempdir. Note that an entry is required for each section of the web site:
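A sketch of what such entries could look like, assuming the default tests/commontempdir directory; the section subdirectories shown are illustrative:

```
grant codeBase "file:tests/commontempdir/Status/jsp/-" {
  permission java.security.AllPermission;
};
grant codeBase "file:tests/commontempdir/History/jsp/-" {
  permission java.security.AllPermission;
};
```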
If you change the settings.common.tempDir setting, you will need to change this entry too, or the web pages won't work.
The default security.policy file includes settings that allow third-party batch jobs to read the bitarchives set up for the [[Quick Start Manual 3.16]] system. In a real installation, the bitarchive machines must specify which directories should be accessible and set up permissions for these. The default setup is:
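A sketch of such a grant, with illustrative bitarchive directory paths; note that no codeBase is specified, so it applies to all classes:

```
grant {
  permission java.io.FilePermission "/netarkiv/0001/filedir/-", "read";
  permission java.io.FilePermission "/netarkiv/0002/filedir/-", "read";
  permission java.util.PropertyPermission
      "settings.archive.bitarchive.useReplicaId", "read";
};
```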
Notice that these permissions are not granted to a specific codebase, but the permissions given are very restrictive: the classes can read files in two explicitly stated directories, and can query the value of the settings.archive.bitarchive.useReplicaId setting – all other settings are off-limits, as is reading and writing other files, including temporary files. If you wish to allow third-party batch jobs to do more, think twice first – loopholes can be subtle.
Configure monitoring (allocating JMX and RMI ports)
Monitoring the deployed NetarchiveSuite relies on JMX (Java Management Extensions). Each application in the NetarchiveSuite needs its own JMX-port and associated RMI-port, so they can be monitored from the NetarchiveSuite GUI with the StatusSiteSection, and using jconsole (see below). You need to select a range for the JMX-ports. In the example below, the chosen JMX/RMI-range begins at 8100/8200. It is important that no two applications on the same machine use the same JMX and RMI ports!
On each machine you need to set the JMX and RMI ports, using the settings settings.common.jmx.port and settings.common.jmx.rmiPort:
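An illustrative fragment, using the first ports of the example ranges:

```xml
<settings>
  <common>
    <jmx>
      <port>8100</port>      <!-- unique per application on the machine -->
      <rmiPort>8200</rmiPort>
    </jmx>
  </common>
</settings>
```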
Firewall Note: This requires that the admin-machine has access to each machine taking part in the deployment on ports 8100-8300.
You need to select a username and password for the monitor JMX settings. This username and password must be set in the settings settings.monitor.jmxUsername and settings.monitor.jmxPassword.
The applications which use Heritrix (the harvesters) need the username and password for the Heritrix JMX settings. This username and password must be set in the settings settings.harvester.harvesting.heritrix.jmxUsername and settings.harvester.harvesting.heritrix.jmxPassword.
These username and password values must be inserted in the conf/jmxremote.password file and the conf/jmxremote.access file. A template for these files is placed in the examples directory (jmxremote_template.access and jmxremote_template.password).
Currently, all applications must use the same password.
The applications will automatically register themselves for monitoring in the GUI application, if the StatusSiteSection is deployed. All important log messages (log level INFO and above) can be studied in the GUI. However, only the last 100 messages from each application instance are available. This number can be increased or decreased using the setting settings.monitor.logging.historySize.
These will give the following settings in the jmxremote.password file:
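For example (the passwords are placeholders), each line pairs a role with its password:

```
monitorRole  monitorPassword
controlRole  heritrixControlPassword
```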
And the following privileges in the jmxremote.access file:
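The access file assigns each role its privileges in the standard JMX access-file format, for example:

```
monitorRole  readonly
controlRole  readwrite
```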
Configure ArcRepository and BitPreservation Database
The ArcRepositoryApplication and the BitPreservation actions available from the GUIApplication can use either files or a database. By default they are set to use files, but this can be changed to a database with the following settings:
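A sketch of such settings; the admin class name is our assumption, while the database values match the URL shown below:

```xml
<settings>
  <archive>
    <admin>
      <!-- Class name assumed; adjust to your NetarchiveSuite version -->
      <class>dk.netarkivet.archive.arcrepositoryadmin.DatabaseAdmin</class>
      <database>
        <baseUrl>jdbc:derby</baseUrl>
        <machine>localhost</machine>
        <port>1527</port>
        <dir>archiveDB</dir>
      </database>
    </admin>
  </archive>
</settings>
```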
These parameters will give the following database URL: jdbc:derby://localhost:1527/archiveDB
If a specific URL is wanted (e.g. for another database type than Derby), then it should be assigned to the baseUrl, and the 'machine', 'port' and 'dir' should be set to the empty string, e.g.:
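For example (a hypothetical PostgreSQL URL):

```xml
<database>
  <baseUrl>jdbc:postgresql://dbhost:5432/archiveDB</baseUrl>
  <machine></machine>
  <port></port>
  <dir></dir>
</database>
```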
This requires a database installed in the directory 'archiveDB' under the installation directory on the machine containing both the ArcRepositoryApplication and the GUIApplication.
If the database is to be installed through Deploy, the following parameter should be added to the relevant deployMachine entity:
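A hypothetical sketch of such an entry (the element name is illustrative, not confirmed):

```xml
<deployMachine name="adminMachine">
  <!-- other deployMachine settings ... -->
  <!-- Element name is an assumption; see the Deploy documentation -->
  <deployArchiveDatabaseDir>archiveDB</deployArchiveDatabaseDir>
</deployMachine>
```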
Examples of deploy configuration files
The following example configuration file requires adaptation to your own system before use.
The instance has two replicas divided over two physical locations. Each physical location contains several machines: bitarchive machines, a harvester machine and a viewerproxy machine. Only one physical location has an administrator machine, which contains the GUI application, the bitarchive monitors, the HarvestJobManager, the HarvestJobMonitor and the arc repository.