Relativity - Processing Console
Console Guide
May 2, 2025 | Version 24.0.375.2
For the most recent version of this document, visit our documentation website.
Table of Contents
1 Relativity Processing Console 5
1.1 RPC features 5
1.2 Running an RPC job 5
1.3 Special considerations 7
1.4 Logging in to the RPC 8
2 Installing the RPC 11
2.1 Licensing 11
2.2 Distribution 11
2.3 Pre-installation requirements 11
2.4 Installing the RPC 13
2.5 Validating the RPC installation 15
2.6 Repairing or uninstalling the RPC 15
2.7 Upgrading the RPC 16
3 Importing data 17
3.1 Creating a data store 17
3.1.1 Working with Relativity-generated data stores 18
3.2 Importing data 18
3.2.1 Import jobs 18
3.3 Troubleshooting import errors 23
3.4 Generating job error reports 23
4 Extracting text 25
4.1 Extracting text 25
4.2 Inspecting the extracted text 27
4.3 Manually re-running single documents with errors 30
4.4 Using filters to resolve errors 30
5 Indexing data 34
5.1 Running an indexing job 34
5.2 Language detection 35
5.3 Merging subindexes 36
6 Generating images 38
n Granular worker and job control – prioritize worker servers into work groups, which you can allocate to work on specific jobs or processing functions.
n Selective import of data – choose specific folders and files to import for a given custodian, import or exclude specific files by extension, and De-NIST documents.
n Extract text - extract text directly from documents; images, or files with embedded images, are OCR’d automatically.
n Indexing options - uses dtSearch as its full-text search engine and automatically indexes all metadata along with extracted text to ensure nothing is overlooked.
n Filter options - multiple filtering options, including date/time, de-dupe with extremely flexible options, cross reference, expression, and full text.
n Reporting – create several reports, including de-dupe filter, error, summary, and search filter.
n Generate images – render documents as PDFs without the need for print drivers, significantly improving performance. Imaging jobs are also multi-threaded, and you can monitor worker activities.
n QC Functionality – preview all images and extracted text - or a random sample - track progress, auto-advance images, and flag them for further review or replacement.
n Full metadata access – all extracted metadata fields are identified and stored in individual fields for potential use.
n Robust export options:
o Numerous filtering options, including date/time, cross reference, expression, and full text.
o Customize and brand/endorse the images for export.
o Select metadata fields.
o Format fields – for example, specify date or date/time for date fields.
o Variable substitution – for example, specifying the return of a variable in a specific field.
o Build switch statements – for example, similar to a v-lookup, "when x is found return y".
o Track status.
Enter your password and click Login. If you use RSA authentication to log in to the RPC, make sure that the
server on which the RPC is installed is configured for RSA, or you won't be able to log in.
2.1 Licensing
You must have a Processing license and be running Server 2024 or above in order to use the Relativity
Processing Console. See the Licensing guide.
2.2 Distribution
For RPC distribution, contact Customer Support.
Web UI
You’re running Relativity 8.2 or above.
Processing is installed to at least one workspace.
You have a valid Processing license.
The RPC version matches the Invariant version.
At least one Worker is running and is designated for processing work.
The version of Processing installed via the Servers tab is correct.
The version of the RPC Installation package corresponds to the Relativity version.
IE Enhanced Security Configuration is off.
The authentication process now mandates client registration.
Use a local admin user account, and run the RPC with Run as administrator.
n bulkadmin
n dbcreator
n public
All pre-existing and newly created processing server databases (Invariant and Store databases)
have RPC users mapped to the SQL Server logins with the db_owner permission set.
1. Log in to the machine with the local admin account where you want to install the RPC.
2. Open the installer executable to launch the setup wizard.
3. Click Next on the Welcome screen.
n Invariant instance database server name with the optional port number
n Relativity instance database server name with the optional port number
n SQL username - must be EDDSDBO
n SQL password
1. Log in to the machine with the local admin account where the RPC and the Queue Manager are
installed.
2. Open the installer executable to launch the setup wizard.
3. Click Next on the Maintenance screen.
n Repair - click this button to repair your RPC installation. The installer adds a fresh copy of any
deleted RPC files, but it doesn't make any configuration changes or replace other files that
already exist. On the Ready to Repair the RPC window, click Repair.
n Remove - click this button to uninstall all RPC components from your machine. On the Ready
to Remove the RPC window, click Remove.
Any databases that you create in Relativity also appear in the RPC. See Working with Relativity-generated
data stores on the next page.
Note: The RPC is designed to work in a distributed environment across multiple SQL, file, and worker
servers. You may see multiple SQL Servers listed in the Data Store window.
Note: You can delete groups but not data stores. You can hide a data store by right-clicking the data store and selecting Hidden.
Once you've created a data store, you can import data. See Importing data on the next page.
1. Identifies the file type, and calculates the hash for the file so that it can be copied to the network.
These hashes are later used for deduplication filters. See Filter types on page 51.
2. Calls the specific handler for the file type, and passes in the required parameters. Each file type has
its own handler (or plugin) that requires its own set of parameters.
3. Adds a job to the queue so a worker can execute the method for opening the file (such as a PST), count the items in it, and add 100-item groups to the job queue.
4. Extracts the first group of, for example, 100 items from a container file and adds them to the queue as a job. It continues with the second group of 100 items, and so on. (Multiple workers may be running these jobs.)
n You may see something other than 100 items in your queue for a container file. This is because for large container files, the RPC breaks the job into a number of smaller sub-jobs, and this number is not fixed. It varies, as it depends on the original size of the container file.
5. Adds jobs for attachments to the queue, jobs for metadata extraction, and so on. This process continues until there are no remaining sub-tasks to add to the queue.
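For example, a container holding roughly 1,000 items might be broken into about ten sub-jobs of approximately 100 items each, so several workers can extract items from the same container in parallel. (These counts are illustrative; as noted above, the actual sub-job size varies with the size of the container.)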
The RPC extracts only metadata from the files during an import job. See Extracting text on page 25 for more
information.
Note: Prioritizing custodians for deduplication is not supported in the RPC. Custodian order is determined during ingestion. These settings can be changed at any time, including during the import process.
n Data Store - the selected data store displays by default. Select another data store from the
drop-down list.
n (Optional) Project and Custodian - enter names for these fields. These names display in the
Data Store window.
o Project - the project that you want to specify for the data import job.
o Custodian - the custodian that you want to specify in the project.
Note: You can import data without defining a project or custodian. You can change
these settings at any time, including during the import process.
n Workstation - select the machine with the data that you want to process. When you select a
workstation, the List box displays all the drives and CD-ROMS available on that computer, so
that you can make them visible to other worker machines. This option is useful if the data
resides on a CD-ROM, USB drive, or other physical media that is associated with a specific
worker machine.
Note: It is possible to add a SQL setting if you have a fixed import location that you would
like to see as a location to import data from. In the Invariant database add an entry to the
AppSettings table with a category of ‘MapVolume’ and enter the UNC path you would like to
import from in the Value2 column. The other columns should stay as NULL. Multiple
MapVolume entries can be added to the table.
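As a minimal sketch of such an entry (assuming a hypothetical UNC path of \\fileserver\ImportData and that the category column is named Category), the SQL run against the Invariant database might look like this:

-- Hypothetical MapVolume entry; leave the remaining columns NULL, per the note above.
INSERT INTO AppSettings (Category, Value2)
VALUES ('MapVolume', '\\fileserver\ImportData');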
5. Choose the option Include selected path in virtual path if you want the virtual path to start at the server name rather than lower in the path hierarchy. If you don't select this option, the path starts with the folder name. For example, if you import from \\FileServer01\Collections\Smith, selecting this option starts the virtual path at FileServer01, while leaving it cleared starts the path at the Smith folder.
6. Click the Additional Settings button.
n Include: +mpp;+msp
n Exclude: -mpp; -msp
Note: You can choose to either include or exclude items on an import. You cannot
have both in the same import.
Only filter parent documents - Applies the filter only to top-level (or parent) and loose documents. Since this filters only top-level documents, all of the associated attachments will be returned.
E-Mail
Enable logging - Debug or troubleshoot a job that crashed, or other minor performance issues. This log file is created on the C: drive of the worker machine processing the file, and log file names use the storage ID of the worker. The file contains subject names, entry IDs, and other information.
Ignore PST/OST CRC errors - Ignores errors generated when a cyclic redundancy check is performed on PST files.
Note: Time zone is applicable to OCR/imaging, text extraction, exporting, and publishing, but not
importing. By default, the RPC stores date type metadata in the database in Coordinated Universal
Time (UTC). While you can optionally set the time zone on the Job Settings General tab when
importing, this information is required prior to running text extraction, imaging or exporting.
n De-Nisted Files - when the RPC discovers a file, it queries the NIST database for a match. If the file
is found, the RPC doesn't import the file, but adds an entry for it to the De-NIST table associated with
the job. This table can later be referenced to obtain a list of de-NISTed files.
n Bad container files - these files include containers without children, as well as password protected,
incomplete, or corrupt containers that documents could not be extracted from. Common container file
types include ZIP, RAR, NSF, and PST. You can use this information to troubleshoot these container
files.
n File Details - the RPC captures detailed information about jobs in standard summary reports. See
Running standard reports on page 108.
3. Click a report type. The RPC generates the report and displays it in a new tab.
The following reports are available for troubleshooting an import job:
n Error Report - lists each error that occurred with detailed exception information. You can also run
this report to troubleshoot extracted text errors and imaging errors.
n Summary - lists the frequency of each file type in the job, the total number of discovered documents, and counts of de-NISTed files. It also includes file sizes of all files, de-NISTed files, and the size of files remaining after de-NISTing, not counting containers. Additionally, there is a brief entry describing each error encountered during import.
n Bad Containers - lists all container files that Invariant was unable to pull any documents from. If a container throws an error on import but Invariant was able to get a single file from it, the container will not appear on this report; however, it will appear on the error report.
Note: Each error listed in the summary report has a corresponding detailed version in the error report. If
you reprocess error files and they don't encounter new errors, they will still appear on the report when you
run that report again. If the reprocessed files encounter errors on the retry, these new errors are listed
along with the original errors. The bad container report is dynamic, which means if a container was
successfully re-imported, it doesn't appear on the bad container report when you run that report again.
See Running standard reports on page 108 for information about other reports.
1. In the Data Stores window, drill down to a completed import job. An import job is directly under a custodian.
2. Right-click the import job > Extract Text.
n Time Zone - set the time zone required by the data set. This information is especially important for processing emails, as date and time are listed in messages and appointments. Obtain the correct time zone from the client.
Note: You should specify the time zone setting prior to running text extraction. If you change
the time zone after running text extraction, then you should run text extraction again. Also,
make sure to change the following settings prior to re-running: on the OCR/Image tab in the
General Options section, check Overwrite intermediate files and un-check Preserve
existing pages.
n Workgroup - select a group. If you want to start the job manually, leave this field set to Group
0.
Note: We recommend not using filters on first-time extracted text. Instead, use filters to retry
errors on subsequent text extraction jobs. See Using filters to resolve errors on page 30. If
you intend to create a full text search filter on your data, the text will need to be indexed prior
to filter creation. Any items filtered out of text extraction will throw an error during the
indexing stage.
4. (Optional) Select the OCR/Image tab if you want to set options for documents that must undergo
OCR before the text can be extracted. See OCR/Image tab settings in Generating images on
page 38.
5. Click Start. The job is moved into the queue, which displays in the Job Activity window.
6. Monitor the progress of the job in the Job Activity window. See Managing workers and jobs on
page 88.
Note: If your workgroup is set to Group 0, highlight the job and select another group to begin processing.
If you have workers assigned to Group 0, this is not necessary, and the first available worker will begin the
job.
Once the text extraction job completes, you can index the data. See Indexing data on page 34.
3. From the Matter Inspector dialog box, click Refresh Needed to display the documents in the text
extraction job. If you did not apply a filter, all the documents display.
4. In the Matter Inspector window, scroll to the right to display Message, Extracted Text File Name, and
Intermediate file columns. You can sort each column in ascending or descending order by clicking on
the column header. Review the following columns and their content for troubleshooting information:
1. Right-click an error in the Message column, and select Extract Text. A new extraction job is created.
2. In the Job Activity window, highlight the job and select a Group to begin processing. See Managing
workers and jobs on page 88.
3. After the job processes, repeat the steps in Inspecting the extracted text on page 27.
Note: This expression returns any records where the Message column is not blank and the file is
not flagged as unprocessable. This means that any files from which the text was extracted
successfully are not returned. You must capitalize Message and Unprocessable, and use two sets
of double quotation marks in the expression.
12. In the Job Activity window, highlight the job and select a Group to begin processing. See Managing
workers and jobs on page 88.
13. After the job processes, repeat the steps in Inspecting the extracted text on page 27. All messages
should be cleared.
Note: You can index data at any point after text extraction or image generation. If you choose to skip text extraction and proceed directly to image generation, the indexing process combines the page-level text into a document-level text file that is used in the index.
1. From the Data Stores window, drill down to the import job that you used for the extract text job.
2. Right-click the import job > Indexing > Build Search Index.
3. In the Job Activity window, highlight the indexing job and select a Group to begin processing. See
Managing workers and jobs on page 88.
4. After the job processes, right-click the indexing job in the Data Store window > Discovery > Error
Report.
5. Review the report to determine if the index needs to be rebuilt.
4. Right-click on the import job, and click Inspect to open the Matter Inspector.
5. To view the language map, highlight the document in the Matter Inspector, and click Properties.
6. In the Properties window, expand Storage Metadata, and then expand Language Map to display a
list of language ranges.
Note: For a complete list of the two-digit codes that accompany each full language name, visit the ISO standards site.
Note: The RPC can search across multiple indexes. It automatically detects them and spans the search
across them, but this process is slower than using a single index.
Note: The option Verify and Merge Subindexes verifies that all expected documents are
indexed, and then attempts to merge the subindexes.
3. Enter a value and click OK.
Note: You can index data at any point after text extraction, and you can image documents immediately after import. You can also index after image generation. The system automatically combines page-level extracted text into document-level extracted text and indexes that text.
The intermediate files are stored in the Intermediate Folder created for a job. This folder may contain
intermediate files using the following naming conventions:
Note: To locate an Intermediate folder for a job, right-click a job in the Data Stores window > Inspect.
Right-click a file in the Matter Inspector > Open > Explore Intermediate Folder. Use the same
procedure to select Explore Native Folder.
Note: On the Document tab, the checkboxes have on, off, and indeterminate states. When the
checkbox is in an indeterminate state, the document’s internal setting for an option is used by
default. When the option is turned on or off in the RPC, this new value overrides the default setting
in the document.
Gridlines - Select this option to render the gridlines between columns and rows in a spreadsheet.
Auto-fit rows and Auto-fit columns - Use these options to expand the dimensions of rows and columns to accommodate their content. When these options are selected, any hidden columns or rows are displayed. They are selected by default.
Note: You can prevent the RPC from displaying hidden columns and rows by setting the DontUnhide option on the Advanced tab to True.
Remove all background fill colors - Select this option to remove background color. It ensures that any hidden text or rows formatted to match the background color are displayed.
Set text color to black - Select this option to display the font color of text as black. It ensures that any hidden text is displayed, such as text with a white font on a white background.
Clear empty rows and columns - Use these options to remove empty rows and columns from a spreadsheet, and render as few pages as possible. By default, these options are selected.
Note: Select this option to speed up processing when you are imaging spreadsheets created in Excel 2007, especially if they contain charts or graphs.
Note: In general, this option is not enabled, because the RPC can preserve the actual value for a field code by preventing modifications to it when the document is opened. For example, field codes for file path and date will not be automatically updated with the current user settings, but will retain the original settings.
Show hidden text - Select this option to render any text added to the document with the Hidden feature in Word.
PowerPoint
Show speaker notes - Renders the slide at the top of the page, and the speaker's notes at the bottom. By default, this option is selected, and the image orientation is portrait even when the document does not contain speaker notes. When this checkbox is not selected, the image orientation for the PowerPoint slides is landscape.
HTML
Render with Word - Select this option to render HTML in Word. By default, the RPC will render HTML documents in Internet Explorer, and then generate a PDF. For some problematic HTML documents, a better image is generated if the file is rendered in Word.
Remove nbsp codes - Select this option to remove long rows of non-breaking space (nbsp) codes, which prevent the text from wrapping properly. When it renders HTML as a PDF, the RPC will automatically format page breaks without cutting text or margins.
Option - Description
General Options
Overwrite intermediate files - Use this option to generate a new intermediate file. The RPC does not store multiple copies of an intermediate file for a document. When an intermediate file already exists, the RPC will use it to generate images unless this option is selected. By default, this option is not selected.
Preserve existing pages - Use this option to prevent existing PDFs from being overwritten. The RPC will skip existing pages when they do not need to be regenerated. By default, this option is selected.
Discard redacted pages - Use this option to ignore redacted pages during processing, and select intermediate PDFs (named as storageID.PDF). By default, this option is not selected. You can use this option when redacted PDFs (storageID_R.PDF) have been added to the Intermediate Folder for a job. Instead of preferring these files, the RPC will process the unredacted files (storageID.PDF).
Image Generation
Format - Use this option to change the image format. For most processing, use Default, which is CCITT v4 for generating black and white TIFs. This is the default setting at installation, but it can be modified at the project or data store level as well. Select another format if you want to force a specific format for images. Options include Color JPG, and for TIFs, CCITT v3 and CCITT v4.
n The RPC doesn’t support adding slipsheets in front of each exported file with dynamic metadata fields on them. However, it is possible to use blank placeholders and apply an endorsement on them of that file’s metadata through the use of switch statements. For more information, see Using a switch statement for custom logic on page 84.
Max pages per doc - Use this option to specify the maximum number of pages imaged for a document. (Set this value if you do not want the entire document imaged.)
Generate PDF only - Select this option to create only PDFs, without page-level text, when imaging.
Searchable PDF - Select this option to perform in-place OCR on the PDF page image elements.
Do not show error messages on placeholders - Select this option to generate blank placeholders. By default, placeholders display an error message.
Render color pages to JPG - Select this option to substitute JPGs for TIFFs when the pages are in color. (TIFFs are generated only in black and white.)
OCR
Engine - Use this option to choose the Nuance engine or no OCR engine. Select NoOCR when you want extracted text but don’t want to OCR pages with images. The default engine is Nuance.
OCR Type - Use this option to control the performance of the OCR job. Select Accurate for more precise OCR, Fast for improved performance, and Balanced to equalize precision and performance.
Preserve text layout - Use this option to maintain the current layout of the text when extracting text from a PDF for OCR. By default, this option is selected.
Layout text in stream order - Use this option to maintain the order of text as in the PDF layout when extracting text for OCR.
Show OCR text separator - Use this option to display a separator between extracted text at the top of a page and text derived from OCR at the bottom of the page. The separator reads as “--- OCR From Images ---“. With the separator set to off, the OCR text will still be on the page beneath the extracted text, but there will be nothing to indicate where one begins and the other ends. By default, this option is selected.
Allow OCR during data extraction - Clear this option if you do not want documents that do not have extractable electronic text to undergo OCR. By default, this option is selected.
Exclude line art during OCR - Select this option to remove line art and annotations added to the original page. (For example, it will remove any text boxes drawn over the original content of the page.)
OCR images DPI box - Use this option to set the DPI level the Nuance engine will use when performing OCR. Changing this can often allow the OCR engine to successfully OCR a file if it failed using the default setting.
5. Click Start. The job is moved into the queue, which displays in the Job Activity window.
6. Monitor the progress of the job in the Job Activity window. See Managing workers and jobs on
page 88.
Note: If your workgroup is set to Group 0, highlight the job and select another group to begin
processing. If you have workers assigned to Group 0, this is not necessary, and the first available
worker will begin the job.
Once you've generated images, you can QC images. See Performing Quality Control tasks on page 99.
Note: For more information on how to execute filters in the RPC, see the Filtering scenarios in the RPC
video webinar on the Relativity Training site.
1. In the Data Stores window, highlight a data store that you want to filter, and click the Filters tab at the
bottom of the store list. If the Filters tab isn't visible here, right-click on the data store name and select
Filter(s)... The Filters tab won't be visible if you previously had it open but then closed out of it.
3. Select a filter type, enter a name in the New Filter box, and press Tab. We recommend pressing Tab after entering the filter name, as this ensures that the name is saved. Under some circumstances, not pressing Tab lists the filter merely by its default name of "New Filter (<Type>)."
4. Enter the required information for the filter type that you selected. You will see specific fields for each
filter type displayed in the center of the tab. See Filter types on page 51.
5. Select one or both of the following options to determine additional actions when the filter flags an
item:
n Flag the parent if it is a child item - if the filter flags a child item, then the parent will also be
flagged.
n Flag all children of flagged parent - if the filter flags the parent, then the children will be
flagged. Select this option if you only want to search for parent documents, but you also want to
include the children of the documents that had a hit.
Note: When you see a list of import jobs in the filter window after selecting a filter type, those jobs are
sorted by Job ID by default, but you can sort on any column (Custodian, Project, Job ID, CreateOn, etc.)
you want to.
Once you've filtered data, you can create export files for review or production. See Exporting data on
page 66.
2. From the Matter Inspector, click , and select a filter in the Filter to apply box.
5. To run the filter, continue with Step 5 in Creating a filter on page 48.
Note: The FileID is the only required field. Other fields are optional.
You must follow the required format for the cross reference file. The first row in the file must be a header
row, which is not imported when you run the filter. You can also define up to five metadata fields in the cross
reference text file:
n EXTRA1
n EXTRA2
n USERDEFINED1
n USERDEFINED2
n USERDEFINED3
Use the following general format for the cross reference file.
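A hypothetical comma-delimited cross reference file using these fields might look like the following; the first row is the header row and is not imported, and only FileID is required:

FileID,EXTRA1,EXTRA2,USERDEFINED1,USERDEFINED2,USERDEFINED3
1001,Smith,Laptop,Responsive,,
1002,Smith,Desktop,Privileged,,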
Note: When running a date filter, make sure to restrict that filter to specific metadata or else it will
return hits on any date that meets the parameters you entered. This may include parameters that
you don't care about when executing a date filter, such as Last Printed On. In this way, not
restricting to specific metadata often results in an over-inclusive document list and costs you time
during your QC process.
n Dedupe the custodian against self - you can de-dupe the data set of a custodian against itself. For
example, you could de-dupe the laptop against the desktop of a custodian.
n Dedupe one custodian against another - you can select one custodian whose document set may
contain duplicates, and then another custodian whose data set determines the documents to be
removed. For example, you may want to dedupe the messages in John’s mailbox against those in
Jane’s. See Targeted deduplication filter scenario on page 57.
n Dedupe against multiple custodians - you can dedupe one custodian against multiple custodians. The RPC uses a weighting algorithm to determine which documents to dedupe when the same custodian is in both groups.
Note: The Deduplication filter does not delete any documents. It simply does not return them in the list of
documents.
5. Select filtering options for E-mails. You can select a combination of different hashes that the RPC will
use to identify duplicate files.
n Processing generates four different hashes for emails and keeps each hash value separate, which allows users to de-duplicate in processing based on individual hashes and not an all-inclusive hash string. For example, if you’re using processing, you have the ability to de-duplicate one custodian’s files against those of another custodian based only on the body hash and not the attachment or recipient hashes.
o Body hash - takes the text of the body of the e-mail and generates a hash.
o Header hash - takes the message time, subject, author’s name and e-mail, and generates a hash.
o Recipient hash - takes the recipient's name and emails and generates a hash.
o Attachment hash - takes each SHA256 hash of each attachment and hashes the
SHA256 hashes together.
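For example, two copies of the same message body sent with different attachments share the same body hash but have different attachment hashes, so deduplicating on the body hash alone treats them as duplicates, while including the attachment hash retains both copies.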
4. Enter a code for a simple filter in the Expression box. If you want to use C# syntax, click Advanced
Expression to auto-generate an outline of a method, and then enter your code.
5. To run the expression on a limited set of data, select a filter in the Sub-Filter box.
6. To run the filter, continue with Step 5 in Creating a filter on page 48.
The following sections provide examples of expression filters. Note that none of these examples requires
advanced mode.
7.3.4.1 Example: Find documents that have an error message and aren't flagged as unprocessable
Use the following expression to isolate documents that have an error message but are not flagged as
unprocessable. This filter is useful for situations in which you've run text extraction and want to rerun
anything that threw an error.
7.3.4.5 Example: Find documents whose names start with a particular string
Use the following expression to isolate documents with names that start with a particular string.
7.3.4.6 Example: Find documents whose names end with a particular string
Use the following expression to isolate documents with names that end with a particular string. This is useful
for finding files with a specific literal extension instead of the identified extension.
Note: You can run the Comprehensive Hits Report to display a list of statistics about the number of
documents that match a search term and other data. See Running standard reports on page 108.
4. Either click next to the Search For window to browse for a file containing your search terms, or
enter them in the following format: "relativity","processing"
1. Open the matter inspector for the job that contains the errors.
2. Select the File Ids of all the files that you want to retry.
3. Right-click and select Copy Selected Cells.
6. In the Generate Images or Data Extract Job window, select the filter you just created from the Filter to
apply drop-down list and select As an Include filter from the Add drop-down list.
Note: For more information on how to export through the RPC, see the Using the RPC Exporter video
webinar on the Relativity Training site.
2. When the Export Wizard is opened, either no import jobs are checked (if opened by clicking on the data store) or a single import job is checked (if opened by clicking on a specific import job). Regardless of which option was chosen, you can check or un-check import jobs as needed for your export. If you want to use a saved export file (.EXF) from a previous job, click Load Export to select it before you pick which jobs to export. There are no adverse effects of selecting the jobs first, but loading an export un-checks all import jobs automatically and you’ll need to reselect the jobs again.
4. If needed, click Options to display the Export Options dialog box. None of these options are required to run an export.
Option - Description
As an Include filter - Includes any document that matches the criteria set in this filter. For example, you could add a keyword search as an include filter, so that all documents with this keyword are exported.
Note: Select the checkbox under Use Metadata to export the keyword and its frequency in a document. This functionality is available because metadata is actually stored in filters. New columns with the metadata are added.
As an Exclude filter - Excludes any document that matches the criteria set in this filter.
As a Dedupe filter (Exclude) - Same as an Exclude filter, except that the RPC uses a different logging code for deduplication. You can run a report on this code to identify files excluded due to deduplication.
As a Reference Only - References the criteria in the filter and may perform an action based on a match. For example, tagging information obtained from Relativity could be used to apply tags to documents that match the metadata in the filter criteria. All documents are exported and tagged as required.
As a Privilege Filter (Reference) - Similar to a Reference Only filter. The Potentially Privileged column is updated to Y for any document that matches the filter criteria. All documents are exported.
n When multiple include/exclude filters are used, the logic used to combine them is an AND statement only. Reference, Privilege, and Placeholder filters do nothing to limit any documents from being exported.
n If any filters were added incorrectly, highlight them and click Remove. The Mask non-relevant filters checkbox will hide any filters that do not apply to the import jobs selected for export. This button can also be used to refresh the filters in the drop-down list if you created new ones while the export wizard was open.
Note: You can't modify filters used in an export that uses a setting of New, Replacement or
Supplemental. If you need to alter a locked filter, you must first clone the filter and then edit
the clone.
n OtherCustodians - export the DeDuped Custodian and Deduped Path information via Relativity (the front end). You can't export either of these fields if you don't choose this option. As a result, standard practice is to export all other metadata using one of the other dropdown settings and to export these two fields to a separate overlay metadata file using this option.
n Untracked - no tracking information for the export is added to the database. You may want to
select this option if you are doing some experimental export jobs that you do not want tracked
in the database. If you are building a new export file definition, you may run several test exports
to see how the data is displayed with the current settings.
11. Click Next to display Select Tasks window.
12. Click New. Point to Page Tasks or Document Tasks, and click on one or more export tasks. Each of
these options will have a variety of task parameters that can be set. For details, see Updating task
parameters on page 73.
n Page Tasks - available options are Copy Images and Copy Text.
o Copy Images - writes TIF and/or JPG image files to a desired location. An image generation job must have been performed first. The exporter will throw an error for each document exported that was not previously imaged.
o Copy Text - writes page-level text files to a desired location. An image generation job must have been performed first. The exporter will throw an error for each document exported that was not previously imaged.
n Document Tasks - available options are Copy Native files, Copy PDF Files, Write cross-reference file, Write extracted text, Write metadata, and Write Summation metadata.
Note: You can add the same task multiple times to an export job. Depending on the
task, you can assign a different file name or folder location to each copy of the task.
For example, you could export a set of metadata for a client and another for opposing
counsel by adding the Write metadata task twice. You would then modify the name of
the task added under Document Tasks, and the name of file output for each task. If
the export job needs to be repeated, you do not need to redo each of the document or
page tasks. You can clear the checkboxes for the tasks that do not need to be redone,
and then perform the export. For example, you may want to add new fields to the
metadata file, so you can select only Write metadata under Document Tasks.
13. Click Next and then Finish to run the export job. The Finished box contains any errors that occurred, along with the File ID for troubleshooting purposes.
14. (Optional) Click Save Export to save your current settings as a reusable export file (.EXF). You can select this load file for use with another export job by clicking Load Export. You can save your export at any point during the export process; you do not have to actually export anything before saving. If you are creating a complicated export, you may want to consider saving your export periodically as you add tasks.
Parameter (Associated Tasks; Universal or Specific) - Notes
Bates Number (All; Universal) - Also serves as the control number for document-level exports. Should be left as {BatesBeginDoc}; expand this section to format the bates/control number.
DocLevel (All; Universal) - If True, numbers are incremented at the doc level.
Ignore Number (All; Universal) - If True, the value in the StartAt field will be disregarded and only the prefix and suffix will be used.
* Semi-universal items only apply to other tasks of the same type. For example, if you made endorsement
settings to one image export task, those settings would also apply to any other image export task but they
won’t apply to endorsements on a Copy PDF task.
Note: You can rename any of the tasks if desired; just left-click once on the name to highlight it and a second time to edit it. For instance, if you are exporting two different metadata files, you may want to name one ‘Opposing Metadata’ and the other ‘Our Metadata.'
The field editor is a powerful tool that provides access to all metadata captured so far in the instance, as well
as numerous ways to manipulate that data.
The upper pane displays what will be returned for a given parameter. In the view above you see the default
fields for Metadata Content. The top line represents the header and the second line the fields of metadata.
The Show Fields button opens the lower pane which provides access to the various fields of metadata. The
presets option lets you quickly change between three common delimiter settings but you aren’t limited to
these.
You can enter any values you want in the Quote and Separator fields to the left. To add a field of metadata,
find what metadata you want to include in your export in the lower pane and do one of three things:
n Ctrl-Left Click will add both the field of metadata to the second line as well as adding the field name to
the header.
n Left Click only will only add the metadata without adding anything to the header.
n Shift-Click will add the name of the field but not the actual metadata.
n Save this field - used to save a customized field of metadata or switch statement for use in other
exports. Saving the field also allows the field to be mapped through the Relativity front end.
n Apply Formatting > Child Values - the exporter will return the corresponding metadata values of a
given document’s children instead of its own. For child documents it will return its own value only.
n Apply Formatting > Parent Values - the exporter will return the corresponding metadata value of a
given document’s parent instead of its own. For parent documents it will return its own value only.
n Remove Formatting - removes any applied formatting.
n File Counter - enter a customized counter that increments with every document. Double clicking on
this allows you to customize the format of the number returned. This is typically used for creating sub-
folder names for native file and extracted text exports.
n Generic Counter - similar to the File Counter, but can be used to increment on any field of metadata rather than just with every document.
n Control Flow > Switch Statement - see below for information on creating a Switch Statement.
n Allow Empty Dates - True = invalid or missing dates will return null (empty). False = invalid or missing dates will return 1/1/1900.
n Date Time Format - you have the full Microsoft custom date and time formatting options available.
n Replacement - if the designated separator character exists in the metadata, it will be replaced with
this character.
n Separator - the character to be used between each value being exported.
Note: Unlike the preset field separator values, there is no function to change all multi-value separators. If
you are changing them from the default | character, be sure to change them on all multi-value fields.
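For example, a Date Time Format value of MM/dd/yyyy HH:mm renders a date of January 5, 2024 2:30 PM as 01/05/2024 14:30; any of the Microsoft custom date and time format specifiers can be combined in this way.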
1. Under Document Tasks, highlight Copy PDF files to display a list of parameters that you can set for
this task.
Note: The Distributed parameter is set to True by default in the previous illustration. This export job
will queue up a task to the workers. The workers can then do the branding, imaging, generating
PDFs and other tasks. You do need to ensure that the destination location is visible to these worker
machines. When this parameter is set to False, the job will run only on the machine that you are
using, but it will be multithreaded. You might disable this parameter if you were exporting to a USB
drive that other machines cannot see, or if you were debugging.
3. Under Bates Number, highlight Prefix. Enter text or a metadata field from the document, job settings, custodian information, or other source. For example, a variable substitution (requiring curly braces) for custodian metadata can be added to the Prefix field.
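As an illustration only (the values here are hypothetical, not taken from this guide), a Prefix of {Custodian}_ would produce Bates numbers such as JSmith_00000001 for documents belonging to a custodian named JSmith.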
You can also edit this field by clicking the Browse button. In the Field Editor dialog box, click Show
Fields to display a list of available metadata. Expand a metadata group, and double-click on a field to
add it to list box.
If you want to truncate the string, click String Truncate to display the Field Properties pop-up where
you can define the maximum length.
Change the default value and close the pop-up to display the Field Editor with the updated formatting
information.
n If the Custodian contains Maude return ML_ and close the switch statement.
n If the Custodian contains The Dude return TD_ and close the switch statement.
n Else return XX_ and close the switch statement.
n You can have as many conditions as you want in your switch statement but remember that they are
checked in order and the switch statement terminates on the first true condition. Because of this you
must make certain that your logic for the conditions accurately covers all possibilities. Let’s say we
have two custodians named John Smith and Sara Smith. Our first condition checks if the custodian
n Name - the system will automatically give a new switch statement a name consisting of a random
string of letters and numbers. You can change this to fit your needs however the name of the switch
must not match the name of an existing field of metadata. A simple way to avoid accidentally doing
this is to precede the name of your switch with an underscore (_).
n Cases - the various checks that will be made against the entry in the switch field.
n Switch - the field of metadata or fixed value that the collection of cases is compared to.
n Default - the value to return if none of the collection of cases is true.
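Putting the earlier Maude/Dude example in these terms (the values are illustrative): the Switch would be the Custodian field, the Cases would be "contains Maude, return ML_" and "contains The Dude, return TD_", and the Default would return XX_.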
The Switch Comparer Collection Editor contains the following settings:
1. Under Document Tasks, highlight Copy PDF files to display a list of parameters that you can set for
this task.
2. Expand Bates Number, and highlight Prefix. Click the Browse button to display the Field Editor.
12. Highlight the switch, and click Edit. Select Save this Field from the menu.
13. In the Save Field As dialog box, enter a name for the switch. This field is now added to the Recently_Used list under Show Fields, and can be reused as necessary.
1. Under Document Tasks, highlight Write metadata to display a list of parameters that you can set for
this task.
2. Highlight Working Folder. Enter a folder path or click the Browse button to select one.
Consider the following when defining a Working Folder path:
n This is a shared field among the Document and Page tasks. For example, when you use this
field for one task, it populates for all tasks with the Working Folder field specified.
n You can't specify a separate folder for separate tasks.
n You can also use variables in the folder path.
n This example uses metadata as a variable, so that each custodian has an individual folder:
c:\Exports\{Custodian}
3. Highlight Metadata Content, and click the Browse button to select the metadata fields to include in
the file as described in the previous section.
4. Click Presets to select a delimiter for use in the file.
For example, select Concordance to separate the metadata fields with the standard Concordance delimiters.
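For reference (these delimiter values describe the common Concordance convention rather than settings shown in this guide), Concordance load files typically use the thorn character (þ, ASCII 254) as the quote character and ASCII character 020 as the field separator between metadata fields.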
1. Under Document Tasks, highlight Copy native files to display a list of parameters that you can set
for this task.
Note: If you're starting an import job of a large PST or similar container, you may notice that only a single worker in the assigned group is doing any work for an extended period of time. This is because the file must be copied to the repository and have hash values calculated for it. This is not a distributable task and can take considerable time on very large files, which is normal. Once these tasks are complete, the other workers will be able to work on importing messages from the PST simultaneously.
Since you can constantly monitor the queue, you can also dynamically reallocate workers from one
workgroup to another as the job load requires. You must stop a worker in order to reallocate it to another
workgroup. The worker will complete its current job, and then display its status as Stopped.
You also have the option to bring the worker offline immediately. In this case, the worker doesn't complete
the current job. When you restart the worker, it automatically performs a series of cleanup tasks based on
stored procedures in the database. It then cleans up any entries added to the Matter table and other places
in the system, as well as removes any sub-jobs that it added to the queue. The restarted worker resets and
re-executes any of its open jobs from the beginning.
While you can queue multiple jobs to the same workgroup, we don't recommend this method because of the
first-in-first-out design of the queue. This method can create contention for shared resources that may
degrade performance if small and large jobs intermix. We instead recommend utilizing multiple workgroups
with their own workers processing a single job.
1. In the View menu, click Workers Window. The Workers Activity window displays a list of available
workers and their current statuses.
3. In the Workgroup box, select a group. The worker is now assigned to this group, and will only process the jobs that are added to it.
4. Click Run to start the worker. Its status will be updated to Running.
n Stopping a worker - the worker stops receiving work from the queue, and it finishes only the tasks it’s working on, not the entire root job. The remainder of that job is then available to be picked up by another worker.
n Taking a worker offline - the worker immediately stops and is brought offline, regardless of whether or not it’s working on any tasks. After a short period of time the worker should bring itself back online automatically. If you are concerned that a worker is hung up on a task, it is recommended to try stopping the worker first to allow as many of the tasks the worker has picked up to end gracefully. If the worker remains in a “stopping” state, then it may be necessary to bring the worker offline.
Note: In order to stop a worker and take a worker offline, the account running the RPC must be a local
administrator on the worker servers.
The panel to the right of the Worker Activity window provides the following information:
n Item - a list of tasks that the worker has performed since it was brought online.
n T (Threads) - a count of how many threads are currently working on the listed Item. It is recommended to sort on this column so that items that are currently being worked on are always displayed at the top of the list.
n Average - the average amount of time the worker has spent executing the items in that row since it
was last brought online (displayed in milliseconds).
n Max - the maximum amount of time the worker has spent executing an item in that row since it was
last brought online (displayed in milliseconds).
n Hit - the total number of times the worker has executed the item in that row since the worker was last
brought online.
Note the following details about the behavior of the Worker Activity window:
n If no worker is highlighted, the Tasks window displays the last worker selected.
n You aren't able to display tasks for all workers simultaneously. You can only display tasks for a single
worker.
n Status - reflects the current status of the worker. This displays one of the following values:
o Running - the worker thread is performing one of its designated jobs.
o Stopping - the worker has been instructed to stop and is attempting to complete any tasks that it had already picked up. Once the worker has completed all tasks that it had picked up, it will change to a status of Stopped. If the worker is unable to complete the tasks it has picked up, the worker will remain in a stopping state indefinitely and will need to be brought offline.
o Offline - the worker has been brought offline by the user or the InvariantWorker.exe program was somehow forced to close. As long as the queue manager is running, the worker will automatically try to bring itself back online, even if it was deliberately taken offline by the user. While the worker is offline, only the Status, Name, ID, and Group columns will be populated.
o Stopped - the worker has been manually stopped. If the worker is either stopped or logged off, the remaining columns will contain no data. A worker being brought online will briefly go into a stopped state before going online.
o Logged Off - the worker is either powered off or the relativity service account is not currently
logged in as the user. While the worker is logged off only the Status, Name, ID, and Group
columns will be populated.
Note: The network on the Utility Server isn't set up to view the status of your workers; therefore, you’ll see all workers logged off in the Worker Activity window in the RPC, and you'll need to refer to the All Workers view of the Processing Administration tab in Relativity to see the status of your workers. For details on the Processing Administration tab, see the Processing User Guide.
1. If the Worker Activity window isn't already open, navigate to the View menu and select Workers Window.
4. In the Properties Window on the right, expand the Workstation row, and scroll down to the Description row. This is the name of your worker. Edit the value directly to the right in the Description row by clicking once inside the current name and modifying it.
Menu Option - Description
Refresh - Displays newly added workers.
Online - Brings a worker back online after being taken offline, if you don’t want to wait for the worker to come back online on its own in a few seconds.
Remote Desktop - Opens a remote desktop connection to the worker, in which you can bring a worker back online and add to your worker machines in order to troubleshoot issues.
Remote Logon - Logs in to the worker as the RCA if the worker has been logged off.
Remote Logoff - Logs the worker off, which closes any tasks that were opened while logged in. This is sometimes a useful last resort before rebooting a worker if it has been working on many problem files that have been causing the worker to hang.
Reboot Worker - Reboots the worker if it needs it. If a worker is unresponsive, this may work to reboot it, but usually if the worker is in such a state it won’t respond to an instruction to reboot either. If the worker is up and running fine, though, and you need to cycle the worker, this button will work.
Remote command - Brings up a text box in which you can enter a command as though you had clicked Start and then Run on the worker machine. This feature should not be needed for typical day-to-day work.
Note: Making changes to the workgroup or priority, or starting, stopping or deleting a job can have
serious consequences, especially to jobs that originated from Processing.
1. In the View menu, click Jobs Window. The Jobs window displays a list of jobs and their current statuses.
3. Click to start the job. The job's status is updated to Running. See Viewing worker activities on
page 90
Note: For more information on how to use the matter inspector, see the Using the RPC Matter Inspector
video webinar on the Relativity Training site.
To QC documents:
1. In the Data Stores window, drill down to the import job that you want to review.
2. Right-click the import job, and click Inspect.
3. From the Matter Inspector window, click Refresh Needed to display a list of documents in the grid
box.
2. If needed, in the Image Viewer, increase or decrease the length of the delay before the RPC moves to the next image. Clicking the plus sign increases the delay and gives you more time to view the image.
4. Perform QC on the images as they automatically appear, one at a time, in the Image Viewer.
n The RPC only flips through each image and presents them to you for your manual QC, it
doesn't perform the QC for you.
n A Placeholder is a document that the RPC couldn't image and thus automatically made a placeholder for.
Note: You can't actually configure a placeholder in the RPC; you can only generate a blank
placeholder through the Generate Placeholder right-click option in the Matter Browser, or
you can upload a PDF file created outside of the RPC for the Custom placeholder PDF
option in the OCR/Image tab of the Job Settings window.
6. Note that the documents you marked as QC'd are green in the Matter Browser.
See Matter Inspector window on page 120 and Image Viewer window on page 123 for more QC options.
Alternatively, you can right-click on a file in the Matter Browser to display a pop-up menu. This menu
includes options for generating placeholders, displaying intermediate folders, and others.
1. In the Matter Browser, select Exclude the following and select Has Images. Doing this displays in the viewer only the files from the import that do not have images.
3. Open the Extracted Text or Page Text Viewer to perform QC on each file you select.
4. Once you've finished QCing a document, click the Finalize QC icon. Files for which you've finished
QC appear green in the Matter Browser.
1. In the Data Store window, highlight a data store that you want to report on, and click the Reports tab.
(You can also right-click on a data store, and select Reports from the menu.)
4. To run the report on a subset of documents, select filters in the Filter to apply drop-down. Click
Add, and select the option to include or exclude the data returned by the filter.
n As an Include filter - Includes any document that matches the criteria set in this filter. For
example, you could add a keyword search as an include filter, so that all documents with this
keyword are exported.
n As an Exclude filter - Excludes any document that matches the criteria set in this filter.
n As a Privilege filter - This has no function in running reports.
5. (Optional) Select Mask non-relevant filters to hide any non-relevant filters from the list. If you only
want to see the filters that apply to one particular project out of several that exist in the RPC, you can
select only that project in the Reports tab and check this option. Doing this blocks the filters that apply
to the other projects from displaying in the Filters to apply drop-down.
7. (Optional) Click Render Individually to view the report for each import job in a separate tab. When
you save the report with this option checked, individual files generate for each import job. The report
data for the selected import jobs is combined into one tab when you don't select this option. To render
reports individually, perform the following steps:
a. Select the report you'd like to run.
b. Select each import job you'd like to run an individual report for.
c. Check the box next to Render Individually.
d. Click View Report. Notice that a tab opens up for each import job you selected.
8. To create a report, perform one of the following tasks:
n Click Save Report to generate an Excel or Adobe PDF version of the report, and save it to a directory of your choice.
n Click View to display the report in the RPC.
There are additional reports available that you can run from the Data Stores window. See Right-click Data
Store options on page 114 for more information about these types of reports.
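The include and exclude options described in the procedure above amount to simple set logic over the document list. The following Python sketch is purely illustrative; it is not part of the RPC or its API, and the function names, data shape, and sample predicates are invented for demonstration only.

# Illustrative sketch only; not RPC code. Shows how "include" and "exclude"
# filters conceptually combine when selecting documents for a report.

def apply_filters(documents, include_filters, exclude_filters):
    """Return documents that match at least one include filter and no exclude filter.

    documents       : iterable of dicts with arbitrary metadata (hypothetical shape)
    include_filters : list of predicates; a document must match at least one (or the list is empty)
    exclude_filters : list of predicates; a document must match none
    """
    selected = []
    for doc in documents:
        included = not include_filters or any(f(doc) for f in include_filters)
        excluded = any(f(doc) for f in exclude_filters)
        if included and not excluded:
            selected.append(doc)
    return selected

# Example: keep documents that hit a keyword, but drop password-protected files.
docs = [
    {"id": 1, "text": "quarterly budget", "password_protected": False},
    {"id": 2, "text": "holiday party", "password_protected": False},
    {"id": 3, "text": "budget forecast", "password_protected": True},
]
keyword = lambda d: "budget" in d["text"]
protected = lambda d: d["password_protected"]
print([d["id"] for d in apply_filters(docs, [keyword], [protected])])  # -> [1]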
By default, only the Worker Activity, Job Activity, and Data Stores panes are open when you first access the
RPC.
Note: Open the various windows and position them where you want them to appear. Then close the program without closing the individual windows first. When you do this, the window positions are stored in a configuration file, so each time you open the RPC, the windows appear where you want them.
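The note above describes layout persistence only in general terms. The Python sketch below is not the RPC's implementation or file format; it is a generic illustration of the idea of saving window positions to a configuration file on exit and restoring them on the next launch. The file name, field names, and sample values are all invented.

# Generic illustration only; not the RPC's actual implementation or file format.
# Demonstrates persisting window positions to a configuration file at exit and
# restoring them on the next launch. All names and values here are invented.

import json
from pathlib import Path

CONFIG_PATH = Path("window_layout.json")   # hypothetical config file

def save_layout(windows):
    """windows: dict of window name -> (x, y, width, height)."""
    CONFIG_PATH.write_text(json.dumps(windows, indent=2))

def load_layout():
    """Return the saved layout, or an empty dict if none has been saved yet."""
    if CONFIG_PATH.exists():
        return json.loads(CONFIG_PATH.read_text())
    return {}

# On exit: remember where the panes were docked.
save_layout({"Worker Activity": (0, 0, 800, 300), "Data Stores": (0, 300, 400, 500)})
# On next launch: reopen the panes where the user left them.
print(load_layout())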
n Error report - gives a detailed list of errors on an import job or any job performed on an import.
n Summary report - lists the frequency of each file type in the job, the total number of discovered documents, and counts of de-NISTed files. It also includes the file sizes of all files, de-NISTed files, and the
n You can’t delete stores associated with active Relativity workspaces, meaning workspaces that
haven't been deleted.
n You can manually delete a store through the right-click menu as soon as the workspace is marked for
deletion by the Case Manager.
n If you don't manually delete a store and the workspace associated with it is deleted from Relativity, that store will automatically be removed from the store list in the RPC.
n Comprehensive Hits - displays a list of search terms and how frequently they occur in the document set. You need to apply a search filter when you run this report.
Summary Reports
n Detailed Summary Report - contains data about the number and size of raw files, the number of files before and after De-NISTing, the number of e-mail messages and their attachments, and other information.
n File Types - lists the extensions of all the files in the data set, and the number of occurrences for
each one. It also includes summary totals.
n File Types and Size - lists the extensions of all the files in the data set, the number of occurrences for each one, and file sizes. It also includes summary totals.
n OCR Page Count - lists the number of OCR'd pages.
n Page Count - lists the total page count, in addition to pages marked as deleted and pages marked as delete candidates.
Error Reports
n Exception Report - displays the exception totals for Unprocessable files, Password protected files,
and Errored Files.
n Password Report - displays the exception totals for Password protected files and total file size
count.
Note: The Matter Inspector is not necessarily reflective of the complete document list that you would be
exporting. This is because the inspector isn’t concerned with document relationships, as it is merely a
place for you to inspect documents that hit on, for example, any keywords you specified in your report
settings. This means that if you don’t flag parents or children and you review the resulting documents in
the Matter Inspector, you may see a different number of results than you’d see in the comprehensive hit
report. Likewise, the number of documents that you export might be different than what you viewed in the
Matter Inspector because the exporter has to honor document relationships, and thus it will not export
child documents without their parents.
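To make the relationship handling in the note above concrete, the sketch below shows why an export that honors families can return a different document count than the Matter Inspector, which looks only at the documents that hit the filter. This is not RPC code; the function name, the parent-mapping shape, and the sample IDs are hypothetical.

# Illustrative sketch only; not RPC code. Demonstrates why an exporter that
# honors parent/child relationships can return more documents than the raw
# filter hits shown in an inspector-style view.

def expand_to_families(hit_ids, parent_of):
    """Given raw filter hits, pull in each hit's parent chain so that no child
    is exported without its parent. parent_of maps child id -> parent id."""
    complete = set(hit_ids)
    for doc_id in hit_ids:
        current = doc_id
        while current in parent_of:          # walk up to the top-level parent
            current = parent_of[current]
            complete.add(current)
    return complete

# Example: a keyword hits only an attachment (doc 11); its parent email (doc 10)
# is added so the family stays intact on export.
parent_of = {11: 10, 12: 10}
inspector_hits = {11}
print(sorted(expand_to_families(inspector_hits, parent_of)))  # -> [10, 11]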
n Copy Selected Cells - copies the cells you've selected to the clipboard. There is no subsequent window or explicit indication that you've taken action, but the cells are copied as soon as you select this option. This is the only right-click option that works on more than one record at a time. The options below apply only to the document under the mouse pointer when you right-click, regardless of how many documents are highlighted.
n Regenerate Images/OCR - opens the Job Settings window, where you can switch to the Image or OCR tab to reconfigure your OCR and image settings and perform an imaging job on the selected document. Once you have configured the desired settings, click Start, and the document is placed in the job queue.
n Extract Text - queues the selected document for text extraction. It does not first open the Job Settings window as the Regenerate Images/OCR option does. If you want to change the settings before extracting text, right-click the appropriate import job in the Data Stores window and select Settings. If the selected document's text has already been extracted, nothing happens unless the Overwrite intermediate files setting on the Text tab is checked.
n Generate Placeholder - adds a placeholder generation job to the queue for the selected document. If you've already specified a file for the Custom placeholder PDF setting in the OCR/Image tab of the Job Settings window, the placeholder that you generate through this right-click option displays as that file. There is no option to override a custom placeholder PDF in the Matter Browser.
n Convert to Container - takes what is currently a document in the job and converts it to a container file. (A conceptual sketch of the resulting re-parenting follows this list.)
o Containers are not exported or published, and their children are promoted up a level. For example, if a parent Word document with two child attachments is converted to a container file, the two child attachments become parents instead of children. They then display as coming from a Word document parent container.
o This option is useful when the RPC misidentifies a container file as a document, for example, if it misidentifies a ZIP file and treats it like a document. In this case, converting the document to a container allows you to correct the relationship.
o When you select this option, the record is no longer available for individual inspection, because you aren't able to inspect containers.
o You also have the option to convert a container to a document, which does the reverse of container conversion.
n Reports - this option currently provides no functionality.
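As noted in the Convert to Container option above, converting a document to a container promotes its children one level. The Python sketch below is illustrative only; it is not RPC code, and the data shape and document IDs are invented. It simply shows the re-parenting that the description implies.

# Illustrative sketch only; not RPC code. Shows the re-parenting implied by
# converting a misidentified document (for example, a ZIP treated as a document)
# into a container: its children are promoted one level.

def convert_to_container(doc_id, parent_of):
    """Re-parent the children of doc_id to doc_id's own parent, or make them
    top-level if doc_id has no parent. parent_of maps child id -> parent id."""
    new_parent = parent_of.get(doc_id)        # None means doc_id was top-level
    for child, parent in list(parent_of.items()):
        if parent == doc_id:
            if new_parent is None:
                del parent_of[child]          # child becomes a top-level parent
            else:
                parent_of[child] = new_parent # child is promoted one level
    return parent_of

# Example: a Word document (doc 1) with two attachments (docs 2 and 3) is
# converted to a container, so the attachments become top-level parents.
print(convert_to_container(1, {2: 1, 3: 1}))  # -> {}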
n Not Started
n Picked Up
n Pending Worker
n Started
n Finished
n Error
n Interrupted
n Stopped
n Paused
n Waiting
n Canceling
n Canceled
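If you script against exported job data, the statuses listed above can be modeled as a simple enumeration. The sketch below is illustrative only; the RPC does not expose this Python type, and the grouping in the example is an arbitrary choice for demonstration.

# Illustrative sketch only; the RPC does not expose this type. A simple
# enumeration of the job statuses listed above, handy when tallying statuses
# pulled from an exported job report.

from enum import Enum

class JobStatus(Enum):
    NOT_STARTED = "Not Started"
    PICKED_UP = "Picked Up"
    PENDING_WORKER = "Pending Worker"
    STARTED = "Started"
    FINISHED = "Finished"
    ERROR = "Error"
    INTERRUPTED = "Interrupted"
    STOPPED = "Stopped"
    PAUSED = "Paused"
    WAITING = "Waiting"
    CANCELING = "Canceling"
    CANCELED = "Canceled"

# Example: count how many jobs in a report have finished, errored, or been canceled.
of_interest = {JobStatus.FINISHED, JobStatus.ERROR, JobStatus.CANCELED}
statuses = [JobStatus.FINISHED, JobStatus.STARTED, JobStatus.ERROR]
print(sum(1 for s in statuses if s in of_interest))  # -> 2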
Note: Making changes to the workgroup or priority, or starting, stopping, or deleting a job can have serious consequences, especially for jobs that originated from Relativity processing. For more information on what occurs when you use these features, see Managing workers and jobs on page 88.
2. Specify settings for all applicable tabs in the Job Settings window.
4. Select whether to save your settings to a (Data) Store or a Project profile by clicking Store or Project.
If you haven't specified a project name, you'll only have the option to save the defaults to a store. You
aren't able to save the settings of the following areas as defaults:
n The General tab
n The File Handling section of the Import tab
n The E-Mail | Ignore PST/OST CRC errors option in the Import tab
5. Note the confirmation screen, which states that your settings have been saved and are now the
defaults for the store or project.
n For workers, the properties window displays the types of work the worker is set up to do. You can also edit the worker's name and set the maximum number of threads the worker can use. For a breakdown of the worker properties, see Viewing basic worker properties on page 94.
n For jobs, the properties window displays document counts, error counts, and Start/Finished/Last Progress times.
n The properties of a given data store are also available.
n The properties window can be especially useful when used in conjunction with the Matter Inspector. Here, you can click a specific document and find all available metadata on that document. You can also edit metadata here, even though this isn't something you would do in a typical workflow.
You can click and highlight the following items to display their properties:
n The Property Pages and blank drop-down above these buttons are currently non-functional.
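Several of the fields in the table that follows (for example, ChildRelativityControlNumbers and ChildFileNames) hold multiple values delimited by semicolons and appear only on parent items. The Python sketch below is not part of the RPC; it only illustrates one way such a delimited value might be split after export, and the sample value is invented.

# Illustrative sketch only; not RPC code. Splits a semicolon-delimited metadata
# value (such as the child document ID or child file name fields described in
# the table below) into a clean Python list. The sample value is invented.

def split_delimited(value, delimiter=";"):
    """Return the non-empty, whitespace-trimmed items in a delimited field."""
    return [item.strip() for item in value.split(delimiter) if item.strip()]

child_ids = "DOC000001; DOC000002; DOC000003"   # hypothetical exported value
print(split_delimited(child_ids))               # -> ['DOC000001', 'DOC000002', 'DOC000003']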
Processing field/source name | RPC field name | Relativity field type | Is Unicode? | Description
Attachment Document IDs | ChildRelativityControlNumbers | Long Text | Yes | Attachment document IDs of all child items in a family group, delimited by semicolon; only present on parent items.
Attachment List | ChildFileNames | Long Text | Yes | Attachment file names of all child items in a family group, delimited by semicolon; only present on parent items.
Author | Author | Fixed-Length Text (50) | Yes | Original composer of document or sender of email message.
BCC Address | EmailBCCSmtp | Long Text | Yes | The full SMTP value for the email address entered as a recipient of the Blind Carbon Copy of an email message.
CC Address | EmailCCSmtp | Long Text | Yes | The full SMTP value for the email address entered as a recipient of the Carbon Copy of an email message.
Child MD5 Hash Values | ChildMD5Hashes | Long Text | Yes | Attachment MD5 Hash values of all child items in a family group, delimited by semicolon; only present on parent items. The RPC can't calculate this value if you have FIPS (Federal Information Processing Standards cryptography) enabled for the worker manager server.
Child SHA1 | ChildSHA1Hashes | Long | Yes | Attachment SHA1 Hash