Reveal Help Center

Predictive Coding

To use Reveal AI, a project must be created with Artificial Intelligence enabled. AI can be enabled for a case either in the Reveal Review Manager or in the Company Admin > Projects details screen for each project. Enabling AI creates an analytics engine case in parallel with the Reveal Review project.

In a case where AI has been enabled, an administrator can associate review tags with the positive and negative choices in the analytics engine. These choices build the AI model, and the corresponding relevance score is synced with the Review platform.


Reveal requires at least one positive and one negative choice to enable prediction on mutually exclusive tags.

See the following for more detail:

Assigning Tag Visibility of the AI Score

There are distinct differences in how the Predictive Coding choices are associated with the various Tag Choices.

  • Mutually Exclusive - predictive associations are made at the choice level and the AI Model is created on the Tag level.

    • The corresponding tag appears in the Document Review screen with the AI Tag showing to indicate AI is enabled, with the positive and negative icons associated with their review choice.

  • Tree/Multi-Select - predictive association is made at the choice level and the AI Model is also created on the Choice level.


The corresponding tag appears in the Document Review screen with the AI Tag showing as enabled at the Choice level. After each AI-enabled choice is a number indicating the weighted probability of the issue being relevant to the current document.

Setting Up AI Enabled Tags

Tags are created and managed under Project Admin > Tags. See Create and Manage Tags for a full discussion of Tag configuration and maintenance.

  1. In the Tag Editor, choose the Add Tag and Choices button at the top of the left panel.

  2. Enter a unique name in the Tag Name field.

  3. Select the type of field: Multi-Select, Mutually Exclusive or Tree.

  4. Add Choice(s) to the Tag by selecting the Add new choice button. This makes an entry in the table with a grab handle at the left.

  5. Give the Choice a name.

  6. Optionally enable either Predictive AI (for Mutually Exclusive tags) or Prediction Enabled (for Tree or Multi-Select tag choices).

  7. Optionally associate Privileged with the choice.

  8. You may set Prediction AI access for selected users or teams, allowing them to address the analytics engine directly.

  9. If you would like users to be able to add new choices dynamically during review, select the 'Updateable in the review screen' option.

  10. To enable tag propagation for related documents, select the relationship types to Auto update for this Tag: Family Members, Near Duplicates, Duplicates and/or Email Threads.

  11. Click Add.

See Create and Manage Tags for a complete discussion of preparing Tag choices and profiles.
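Reveal configures tags entirely through the UI, but the options collected in the steps above can be summarized as a data structure. The sketch below is hypothetical and for illustration only; the class and field names are not part of any Reveal API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of the tag options from the steps above.
# Reveal itself exposes these only through the Tag Editor UI.

@dataclass
class TagChoice:
    name: str                          # step 5
    prediction_enabled: bool = False   # step 6 (Tree/Multi-Select choices)
    privileged: bool = False           # step 7

@dataclass
class Tag:
    name: str                          # step 2: must be unique
    tag_type: str                      # step 3: "Multi-Select", "Mutually Exclusive", or "Tree"
    choices: List[TagChoice] = field(default_factory=list)   # step 4
    predictive_ai: bool = False        # step 6 (Mutually Exclusive tags)
    updateable_in_review: bool = False # step 9
    auto_update: List[str] = field(default_factory=list)     # step 10, e.g. "Family Members"

# Example: a Mutually Exclusive tag set up for prediction needs at least
# one positive and one negative choice before prediction can be enabled.
responsiveness = Tag(
    name="Responsiveness",
    tag_type="Mutually Exclusive",
    choices=[TagChoice("Responsive"), TagChoice("Not Responsive")],
    predictive_ai=True,
)
```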

Searching Using Predictive Scores

Because AI predictive tagging scores are integers, searches may be run in Reveal to assess probable responsive documents by these measures.

There are two ways to search using predictive scores:

  1. Search on values in the AI Score metadata fields; or

  2. Under MORE in Refine Search use Tag Prediction Score to examine scores on selected tags using selected operators.

For both methods, click the More Options icon at the right of the Search bar in the Review Screen. This opens the Refine Search window.

  • Select Field from the shaded button bar in the Refine Search screen.

  • Select one of the AI Score fields, for example AI Score - Responsiveness.

  • In the selection window that opens, choose the operator from:

    • Equal to

    • Not equal to

    • Greater than

    • Greater than or equal to

    • Less than

    • Less than or equal to

    • Is null

    • Is not null

    • Range

  • Then enter the value for comparison, e.g., 80 for a reasonably high probability.

  • Click Save.

You may repeat the process and select other AI-enabled Tag fields to refine the search, or combine with other search criteria.
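The comparison Reveal applies to an AI Score field can be sketched in code. This is an illustrative approximation only, assuming the operator names listed above and treating an unscored document as a null value; Reveal performs the actual evaluation server-side.

```python
# Hypothetical sketch of how Refine Search operators compare an
# AI Score field value against the value(s) you enter.

def matches(score, operator, value=None, high=None):
    """Return True if `score` (None when the document is unscored)
    satisfies the chosen comparison operator."""
    if operator == "Is null":
        return score is None
    if operator == "Is not null":
        return score is not None
    if score is None:
        return False
    if operator == "Equal to":
        return score == value
    if operator == "Not equal to":
        return score != value
    if operator == "Greater than":
        return score > value
    if operator == "Greater than or equal to":
        return score >= value
    if operator == "Less than":
        return score < value
    if operator == "Less than or equal to":
        return score <= value
    if operator == "Range":
        return value <= score <= high
    raise ValueError(f"Unknown operator: {operator}")

# e.g. keep documents whose AI Score - Responsiveness is at least 80
# (document IDs and scores below are invented for the example)
docs = {"DOC-1": 92, "DOC-2": 47, "DOC-3": None}
hits = [d for d, s in docs.items() if matches(s, "Greater than or equal to", 80)]
# hits == ["DOC-1"]
```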

Tag Prediction Score

The method is similar, drawing upon the same score data from a different location.

  • Select More from the shaded button bar in the Refine Search screen.

  • Select Tag Prediction Score.

  • Choose the AI Enabled tag to be used from the dropdown list.

  • In the selection window that opens, choose the comparison operator from:

    • Greater than

    • Greater than or equal to

    • Less than

    • Less than or equal to

    • Range

  • Then enter the value for comparison, e.g., 50 for an even probability.

  • Click Save.

You may repeat the process and select other AI-enabled Tag fields to refine the search, or combine with other search criteria.

Viewing Predictive Scores as a Metadata Field

When a case has Artificial Intelligence enabled, scores and tags are synced between the Review tool and the analytic engine in near real time. When an AI model is created, a corresponding field is created by default in the Review tool. This field is named with the Tag or Choice name, in the form AI Score - <Tag Choice> or NexLP <Fieldname>. The field can be added to a field profile for use in searching and filtering.
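The naming pattern for these synced fields can be expressed as a simple rule. The helper functions below are hypothetical illustrations of the convention just described, not part of any Reveal API.

```python
# Hypothetical sketch of the default field-naming convention for
# AI-synced metadata fields. Reveal creates these fields automatically.

def ai_score_field(tag_or_choice: str) -> str:
    """Field holding the predictive score for a Tag or Choice."""
    return f"AI Score - {tag_or_choice}"

def nexlp_field(fieldname: str) -> str:
    """Custom analytic-engine field replicated into Reveal Review."""
    return f"NexLP {fieldname}"

# e.g. a Mutually Exclusive "Responsiveness" tag would sync to
# "AI Score - Responsiveness"; an engine-side "Cluster" field would
# appear as "NexLP Cluster".
```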

Artificial intelligence enabled tag metadata fields:


Artificial intelligence analytic engine linked field metadata:

Tag AI Predictive Scores

When the analytic engine has a score to associate with a document, it is stored in this integer field. Values range from 0 to 100. These scores are shown as parentheticals following a Mutually Exclusive Tag or a Multi-Select or Tree Tag Choice, and as integer values in the related AI Score fields.

Here are predictive AI scores for a document shown in the Tag pane:


Here are some of the comparable scores in metadata for this document - note the scores match for each labeled item:



Sometimes predictive AI scores will be negative numbers, which correspond to the error codes below.

  • -100 - Unclassified: These documents receive a probability score of 50 from the COSMIC classifier, meaning they are classified as neither positive nor negative in the current round of COSMIC classification.

  • -200 - No Score – Errored Documents: An error occurred during the second pass of processing for these documents.

  • -300 - No Score – Empty Documents: The document has no text or metadata representation in the form of vectors. This almost never happens for emails, as there is metadata in the form of From, Sent, Subject, etc., but it can happen for an attachment that has only a unique filename.


    To reduce this number, tag more documents and train the model. As the model grows stronger, it will reduce the number of empty documents by recognizing the metadata and text features needed to give the documents a proper score.

  • -400 - No Score – Missing Text Vectors: This reflects the number of documents for which the system cannot locate text vectors. This can occur if something abnormal happened during processing or if any vector files have changed location.

  • -500 - Uncertain – No Model Features: The document has some metadata or text features, but those features are not present in the current run of the COSMIC classification model. In other words, if the model does not contain the set of features that a document has, the document cannot be scored against the model and is marked as “Empty”.
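Because valid scores and error codes share the same integer field, a consumer of this field must distinguish the two ranges. The lookup below is a hypothetical sketch based only on the codes listed above; the message strings are paraphrases, not values Reveal stores.

```python
# Hypothetical interpreter for the integer AI score field.
# Scores 0-100 are real predictions; negative values are the
# error codes documented above.

ERROR_CODES = {
    -100: "Unclassified: COSMIC probability of 50, neither positive nor negative",
    -200: "No Score: error during second-pass processing",
    -300: "No Score: empty document (no text or metadata vectors)",
    -400: "No Score: missing text vectors",
    -500: "Uncertain: document features absent from current COSMIC model",
}

def interpret_score(score: int) -> str:
    """Map a raw AI score field value to a human-readable description."""
    if 0 <= score <= 100:
        return f"Predictive score: {score}"
    return ERROR_CODES.get(score, f"Unknown code: {score}")
```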

Analytic Engine Linked Custom Field Metadata

Where additional analysis has been undertaken on the AI side and written to custom fields, those fields will be replicated in Reveal Review with a NexLP prefix to carry the data across the link.


Users with Prediction AI access permission (set in Project Admin > Tags) and others with AI access rights may log into the related database in the Artificial Intelligence module and further examine the analytical information.

To examine the Cluster referenced here, for example:

  • Open Artificial Intelligence under the Flyout Menu.

  • Login with your AI username and password.

  • Select the Storybook with the same name as your Reveal Review project.

  • To examine Clusters, open Entities on the dropdown menu and enter either the number or one of the keywords indicated in the NexLP Cluster field.


More information on this topic can be found in the Reveal AI documentation.