public class DigestIndexer extends Object

Indexing can be done via the command line (run with the --help parameter to print usage information) or by embedding this class natively in other applications.

This class also defines string constants for the Lucene field names.
| Modifier and Type | Field and Description |
|---|---|
| `static String` | `FIELD_DIGEST`: The content digest as a String. |
| `static String` | `FIELD_ETAG`: The document's etag. |
| `static String` | `FIELD_ORIGIN`: A field containing meta-data on where the original version of a document is stored. |
| `static String` | `FIELD_TIMESTAMP`: The URL's timestamp (time of fetch). |
| `static String` | `FIELD_URL`: The URL. |
| `static String` | `FIELD_URL_NORMALIZED`: A stripped (normalized) version of the URL. |
| `static String` | `MODE_BOTH`: Both URL and hash are indexed. |
| `static String` | `MODE_HASH`: Index the hash, enabling lookups by hash (content digest). |
| `static String` | `MODE_URL`: Index the URL, enabling lookups by URL. |
| Constructor and Description |
|---|
| `DigestIndexer(String indexLocation, String indexingMode, boolean includeNormalizedURL, boolean includeTimestamp, boolean includeEtag, boolean addToExistingIndex)`: Each instance of this class wraps one Lucene index for writing deduplication information to it. |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `close()`: Close the index. |
| `org.apache.lucene.index.IndexWriter` | `getIndex()` |
| `static void` | `main(String[] args)` |
| `static String` | `stripURL(String url)`: An aggressive URL normalizer. |
| `long` | `writeToIndex(CrawlDataIterator dataIt, String mimefilter, boolean blacklist, String defaultOrigin, boolean verbose)`: Writes the contents of a CrawlDataIterator to this index. |
| `long` | `writeToIndex(CrawlDataIterator dataIt, String mimefilter, boolean blacklist, String defaultOrigin, boolean verbose, boolean skipDuplicates)`: Writes the contents of a CrawlDataIterator to this index. |
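Taken together, the methods above form a simple lifecycle: construct the indexer, feed it one or more CrawlDataIterators via writeToIndex, then close. A hedged usage sketch (it assumes the deduplicator and Lucene jars are on the classpath; CrawlLogIterator and the file paths shown are assumptions, not part of this documentation):

```java
// Usage sketch only: requires the deduplicator and Lucene jars on the
// classpath. CrawlLogIterator and the paths are assumptions; only
// DigestIndexer's own signatures come from this documentation.
DigestIndexer indexer = new DigestIndexer(
        "/tmp/dedup-index",      // indexLocation: hypothetical path
        DigestIndexer.MODE_BOTH, // index both URL and hash
        true,                    // includeNormalizedURL
        true,                    // includeTimestamp
        false,                   // includeEtag
        false);                  // addToExistingIndex: overwrite any existing index
try {
    CrawlDataIterator dataIt = new CrawlLogIterator("/tmp/crawl.log"); // assumed iterator
    // Whitelist text/* mimetypes (blacklist == false), no default origin, verbose output.
    indexer.writeToIndex(dataIt, "^text/.*", false, null, true);
} finally {
    indexer.close();
}
```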
public static final String FIELD_URL
public static final String FIELD_DIGEST
public static final String FIELD_TIMESTAMP
public static final String FIELD_ETAG
public static final String FIELD_URL_NORMALIZED
public static final String FIELD_ORIGIN
public static final String MODE_URL
public static final String MODE_HASH
public static final String MODE_BOTH
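MODE_HASH enables lookups by content digest, which is the basis for duplicate detection: a recrawled document whose digest is already indexed can be treated as a duplicate. A self-contained sketch of that lookup idea, using a plain HashMap as a stand-in for the Lucene index (illustrative only, not this class's implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class DigestLookupSketch {
    // Maps content digest -> URL of the first document seen with that digest.
    private final Map<String, String> byDigest = new HashMap<>();

    /**
     * Records the digest/URL pair and returns the URL of an earlier
     * document with the same digest, or null if the content is new.
     */
    public String recordAndLookup(String digest, String url) {
        String earlier = byDigest.get(digest);
        if (earlier == null) {
            byDigest.put(digest, url);
        }
        return earlier;
    }

    public static void main(String[] args) {
        DigestLookupSketch sketch = new DigestLookupSketch();
        System.out.println(sketch.recordAndLookup("sha1:abc", "http://a.example/")); // null
        System.out.println(sketch.recordAndLookup("sha1:abc", "http://b.example/")); // http://a.example/
    }
}
```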
public DigestIndexer(String indexLocation, String indexingMode, boolean includeNormalizedURL, boolean includeTimestamp, boolean includeEtag, boolean addToExistingIndex) throws IOException

Parameters:
- indexLocation - The location of the index (path).
- indexingMode - Index MODE_URL, MODE_HASH or MODE_BOTH.
- includeNormalizedURL - Whether a normalized version of the URL should be added to the index. See stripURL(String).
- includeTimestamp - Whether a timestamp should be included in the index.
- includeEtag - Whether an Etag should be included in the index.
- addToExistingIndex - Whether to open an existing index. Setting this to false will cause any index at indexLocation to be overwritten.

Throws:
- IOException - If an error occurs opening the index.

public org.apache.lucene.index.IndexWriter getIndex()
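Both writeToIndex variants below share the mimefilter/blacklist pair. Their documented semantics (the regex is a blacklist when blacklist is true, a whitelist otherwise) can be sketched with a hypothetical helper; Pattern.matches is an assumption about how the matching is done:

```java
import java.util.regex.Pattern;

public class MimeFilterSketch {
    // Returns true if a record with this mimetype should be indexed.
    // blacklist == true : the regex excludes matching mimetypes.
    // blacklist == false: the regex is a whitelist of mimetypes to include.
    public static boolean include(String mimetype, String mimefilter, boolean blacklist) {
        boolean matches = Pattern.matches(mimefilter, mimetype);
        return blacklist ? !matches : matches;
    }

    public static void main(String[] args) {
        System.out.println(include("text/html", "image/.*", true));  // true (not blacklisted)
        System.out.println(include("image/png", "image/.*", true));  // false (blacklisted)
        System.out.println(include("image/png", "image/.*", false)); // true (whitelisted)
    }
}
```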
public long writeToIndex(CrawlDataIterator dataIt, String mimefilter, boolean blacklist, String defaultOrigin, boolean verbose) throws IOException

Writes the contents of a CrawlDataIterator to this index.

This method may be invoked multiple times with different CrawlDataIterators until close() has been called.

Parameters:
- dataIt - The CrawlDataIterator that provides the data to index.
- mimefilter - A regular expression used as a filter on the mimetypes to include in the index.
- blacklist - If true, the mimefilter is used as a blacklist for mimetypes; if false, it is treated as a whitelist.
- defaultOrigin - If an item is missing an origin, this default value will be assigned to it. May be null if no default origin value should be assigned.
- verbose - If true, progress information will be sent to System.out.

Throws:
- IOException - If an error occurs writing the index.

public long writeToIndex(CrawlDataIterator dataIt, String mimefilter, boolean blacklist, String defaultOrigin, boolean verbose, boolean skipDuplicates) throws IOException
Writes the contents of a CrawlDataIterator to this index.

This method may be invoked multiple times with different CrawlDataIterators until close() has been called.

Parameters:
- dataIt - The CrawlDataIterator that provides the data to index.
- mimefilter - A regular expression used as a filter on the mimetypes to include in the index.
- blacklist - If true, the mimefilter is used as a blacklist for mimetypes; if false, it is treated as a whitelist.
- defaultOrigin - If an item is missing an origin, this default value will be assigned to it. May be null if no default origin value should be assigned.
- verbose - If true, progress information will be sent to System.out.
- skipDuplicates - Do not add URLs that are marked as duplicates to the index.

Throws:
- IOException - If an error occurs writing the index.

public void close() throws IOException

Throws:
- IOException - If an error occurs while closing the index.

public static String stripURL(String url)
Example: http://www.bok.hi.is/?lang=ice would become http://bok.hi.is

Parameters:
- url - The URL to strip.

Copyright © 2005–2015 The Royal Danish Library, the Danish State and University Library, the National Library of France and the Austrian National Library. All rights reserved.
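The stripURL example above (http://www.bok.hi.is/?lang=ice becoming http://bok.hi.is) suggests at least three normalizations: dropping the query string, the www. host prefix, and the trailing slash. A self-contained approximation of that behavior; the real method's rules may be more aggressive:

```java
public class UrlNormalizerSketch {
    // Approximates the documented stripURL example: drop the query string,
    // a leading "www." on the host, and any trailing slash. These three
    // rules are inferred from the example, not from the implementation.
    public static String strip(String url) {
        String s = url;
        int q = s.indexOf('?');
        if (q >= 0) {
            s = s.substring(0, q); // drop query string
        }
        s = s.replaceFirst("^(https?://)www\\.", "$1"); // drop www. prefix
        if (s.endsWith("/")) {
            s = s.substring(0, s.length() - 1); // drop trailing slash
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(strip("http://www.bok.hi.is/?lang=ice")); // http://bok.hi.is
    }
}
```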